Automatic Synthesis of a Continuous Dynamic Stabilization System based on Artificial Neural Networks

Authors: Dotsenko A.V. Published: 10.09.2020
Published in issue: #3(132)/2020  
DOI: 10.18698/0236-3933-2020-3-66-83

Category: Informatics, Computer Engineering and Control | Chapter: Mathematical Modelling, Numerical Methods, and Program Complexes  
Keywords: perceptron, control synthesis, genetic algorithm, supervised learning, dynamic system

The article considers an automated process of synthesising a continuous dynamic stabilisation system based on multilayer neural networks. We propose a two-stage solution to the synthesis problem. The first stage generates the training dataset: the optimal control problem for the dynamic system under consideration is solved multiple times, each time with different initial conditions, and for every initial condition the optimal control is obtained as a function of the object state. The second stage approximates the generated training dataset with a multilayer perceptron. The trained perceptron acts as a control unit covering the entire initial-condition space of the control object. The innovative aspect of our work is that we reduce the synthesis problem to an approximation problem: a closed-loop stabilising system is sought by approximating the optimal controls obtained while minimising an integral-type functional. The paper presents a numerical experiment in synthesising a stabilisation system for a track-laying robot. To test the synthesised stabilisation system, we consider initial conditions outside the training dataset and show how the system operates under external perturbation.
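The two-stage scheme described in the abstract can be sketched in code. Everything specific below is an illustrative assumption, not the paper's actual setup: a known stabilising linear feedback with hypothetical gains stands in for the optimal-control solutions of stage one, and the stage-two approximator is a minimal one-hidden-layer perceptron trained by full-batch gradient descent rather than the network and optimiser used in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (sketch): build a training set of (state, control) pairs.
# A known stabilising linear feedback u = -k1*x1 - k2*x2 (hypothetical
# gains) stands in for the repeated optimal-control solutions.
k = np.array([1.5, 2.0])
X = rng.uniform(-1.0, 1.0, size=(500, 2))   # sampled states / initial conditions
U = -(X @ k)                                # control labels

# Stage 2: approximate the dataset with a one-hidden-layer perceptron
# (tanh hidden units, linear output) so the network maps state -> control.
H = 16
W1 = rng.normal(0, 0.5, size=(2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, size=H);      b2 = 0.0

def forward(X):
    """Perceptron output and hidden activations for a batch of states."""
    Z = np.tanh(X @ W1 + b1)
    return Z @ W2 + b2, Z

lr = 0.05
for epoch in range(3000):
    pred, Z = forward(X)
    err = pred - U                          # residual on the whole batch
    # Backpropagate gradients of 0.5 * mean squared error.
    gW2 = Z.T @ err / len(X); gb2 = err.mean()
    dZ = np.outer(err, W2) * (1 - Z**2)     # through tanh
    gW1 = X.T @ dZ / len(X);  gb1 = dZ.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((forward(X)[0] - U) ** 2))
print(f"final training MSE: {mse:.5f}")
```

Once trained, the perceptron is queried at arbitrary states, including ones never seen during data generation, which is exactly the property the article tests by starting the closed-loop system outside the training set.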


[1] Kolesnikov A.A., Kolesnikov A.A., Kuz’menko A. The ADAR method and theory of optimal control in the problems of synthesis of nonlinear control systems. Mekhatronika, avtomatizatsiya, upravlenie, 2016, vol. 17, no. 10, pp. 657--669 (in Russ.). DOI: https://doi.org/10.17587/mau.17.657-669

[2] Kolesnikov A.A. Synergetic control theory: conceptions, methods, development tendencies. Izvestiya TRTU, 2001, vol. 23, no. 5, pp. 7--27 (in Russ.).

[3] Khalil H.K. Nonlinear systems. Prentice Hall, 2002.

[4] Gen K., Chulin N.A. Stabilization and control algorithms for quadcopter flight. Molodezhnyy nauchno-tekhnicheskiy vestnik, 2014, no. 11 (in Russ.). Available at: http://masters.donntu.org/2017/etf/shichanin/library/pdf_5.pdf

[5] Gen K., Chulin N.A. Stabilization algorithms for automatic control of the trajectory movement of quadcopter. Nauka i obrazovanie: nauchnoe izdanie MGTU im. N.E. Baumana [Science and Education: Scientific Publication], 2015, no. 5 (in Russ.). DOI: http://dx.doi.org/10.7463/0515.0771076

[6] Diveev A., Shmalko E., Sofronova E. Multipoint numerical criterion for manifolds to guarantee attractor properties in the problem of synergetic control design. ITM Web Conf., 2018, vol. 18, art. 01001. DOI: https://doi.org/10.1051/itmconf/20181801001

[7] Diveev A., Kazaryan D., Sofronova E. Symbolic regression methods for control system synthesis. 22nd Mediterranean Conf. Control and Automation, 2014, pp. 587--592. DOI: https://doi.org/10.1109/MED.2014.6961436

[8] Kazaryan D.E., Savinkov A.V. Grammatical evolution for neural network optimization in the control system synthesis problem. Procedia Comput. Sci., 2017, vol. 103, no. C, pp. 14--19. DOI: https://doi.org/10.1016/j.procs.2017.01.002

[9] Gladkov L.A., Kureychik V.V., Kureychik V.M. Geneticheskie algoritmy [Genetic algorithms]. Moscow, FIZMATLIT Publ., 2010.

[10] Hornik K., Stinchcombe M., White H. Multilayer feedforward networks are universal approximators. Neural Netw., 1989, vol. 2, iss. 5, pp. 359--366. DOI: https://doi.org/10.1016/0893-6080(89)90020-8

[11] Goodfellow I., Bengio Y., Courville A. Deep learning. MIT Press, 2016.

[12] Dubins L.E. On curves of minimal length with a constraint on average curvature and with prescribed initial and terminal positions and tangents. Amer. J. Math., 1957, vol. 79, pp. 497--516. DOI: https://doi.org/10.2307/2372560

[13] Li M., Zhang T., Chen Y., et al. Efficient minibatch training for stochastic optimization. Proc. 20th ACM SIGKDD Int. Conf. Knowledge Discovery and Data Mining, 2014, pp. 661--670. DOI: https://doi.org/10.1145/2623330.2623612

[14] Keras: website. Available at: https://keras.io (accessed: 15.03.2020).

[15] Kingma D.P., Ba J. Adam: a method for stochastic optimization. arxiv.org: website. Available at: https://arxiv.org/abs/1412.6980 (accessed: 15.03.2020).

[16] Glorot X., Bordes A., Bengio Y. Deep sparse rectifier neural networks. Proc. 14th Int. Conf. Artificial Intelligence and Statistics, 2011, pp. 315--323. Available at: http://proceedings.mlr.press/v15/glorot11a/glorot11a.pdf (accessed: 15.03.2020).

[17] Nair V., Hinton G.E. Rectified linear units improve restricted Boltzmann machines. Proc. 27th Int. Conf. Machine Learning (ICML-10), 2010, pp. 807--814. Available at: http://www.cs.toronto.edu/~hinton/absps/reluICML.pdf (accessed: 15.03.2020).

[18] Glorot X., Bengio Y. Understanding the difficulty of training deep feedforward neural networks. Proc. 13th Int. Conf. Artificial Intelligence and Statistics, 2010, pp. 249--256. Available at: http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf (accessed: 15.03.2020).

[19] Ioffe S., Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift. arxiv.org: website. Available at: https://arxiv.org/abs/1502.03167v3 (accessed: 15.03.2020).