Methods for Monitoring the Psychophysiological State of a Human Operator via Emotions Reflected in Facial Expressions and Analysis of Blinking Characteristics using Deep Convolutional Neural Networks

Authors: Korsun O.N., Yurko V.N. Published: 29.03.2021
Published in issue: #1(134)/2021  
DOI: 10.18698/0236-3933-2021-1-120-134

Category: Informatics, Computer Engineering and Control | Chapter: System Analysis, Control, and Information Processing  
Keywords: convolutional neural networks, emotions, blinking, state of the operator

We analysed two approaches to estimating the state of a human operator from video imaging of the face. These approaches, both using deep convolutional neural networks, are as follows: 1) automated emotion recognition; 2) analysis of blinking characteristics. The study involved assessing changes in the functional state of a human operator performing a manual landing in a flight simulator. During this process, flight parameters were recorded and the operator's face was filmed. We then used our custom software to perform automated recognition of emotions (or blinks) and synchronised the recognised events with the recorded flight parameters. As a result, we detected persistent patterns linking the operator's fatigue level to the number of emotions recognised by the neural network. The type of emotion depends on the unique psychological characteristics of the operator. Our experiments allow these links to be traced easily when analysing the emotions of "Sadness", "Fear" and "Anger". The study also revealed a correlation between blinking properties and piloting accuracy: higher piloting accuracy corresponded to a greater number of recorded blinks, which may be explained by a stable psychophysiological state leading to confident piloting.
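The synchronisation step described in the abstract, pairing each recognised emotion or blink with the flight parameters recorded at the same moment, can be sketched as a nearest-timestamp match. This is a minimal illustrative sketch, not the authors' software; the data, timestamps and function names are assumptions.

```python
from bisect import bisect_left

def synchronise(events, flight_log):
    """Pair each recognised event (timestamp, label) with the flight-log
    record (timestamp, value) whose timestamp is nearest to the event."""
    times = [t for t, _ in flight_log]  # flight_log assumed sorted by time
    paired = []
    for t_event, label in events:
        i = bisect_left(times, t_event)
        # step back if the preceding log record is at least as close
        if i > 0 and (i == len(times) or
                      abs(times[i - 1] - t_event) <= abs(times[i] - t_event)):
            i -= 1
        paired.append((label, flight_log[i]))
    return paired

# hypothetical data: emotion events from the face video and
# altitude samples (s, m) from the flight simulator log
events = [(0.52, "Sadness"), (2.48, "Fear")]
flight_log = [(0.0, 1200.0), (1.0, 1150.0), (2.0, 1100.0), (3.0, 1050.0)]
print(synchronise(events, flight_log))
```

In practice such pairing would run over per-frame network outputs, but the nearest-timestamp idea is the same regardless of the recording rates of the two streams.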

The study was supported by the Russian Foundation for Basic Research (RFBR project no. 18-08-01142).


[1] Korsun O.N., Yurko V.N. [Assessment of operator state by image of his face based on deep convolutional networks]. Sb. dok. XVI Vseros. nauch.-tekh. konf. "Nauchnye chteniya po aviatsii" [Proc. XVI Rus. Sc.-Tech. Conf. "Scientific Readings on Aviation"]. Moscow, VVIA im. Zhukovskogo Publ., 2019, pp. 266--270 (in Russ.).

[2] Akhmetshin R.I., Kirpichnikov A.P., Shleymovich M.P. Recognizing human emotions from images. Vestnik tekhnologicheskogo universiteta [Bulletin of the Technological University], 2015, vol. 18, no. 11, pp. 160--163 (in Russ.).

[3] Mehendale N. Facial emotion recognition using convolutional neural networks (FERC). SN Appl. Sci., 2020, vol. 2, no. 3, art. 446. DOI: https://doi.org/10.1007/s42452-020-2234-1

[4] Gonzalez-Lozoya S.M., de la Calleja J., Pellegrin L., et al. Recognition of facial expressions based on CNN features. Multimed. Tools Appl., 2020, vol. 79, no. 19-20, pp. 13987--14007. DOI: https://doi.org/10.1007/s11042-020-08681-4

[5] Sun M., Tsujikawa M., Onishi Y., et al. A neural-network-based investigation of eye-related movements for accurate drowsiness estimation. Proc. IEEE EMBC, 2018, pp. 5207--5210. DOI: https://doi.org/10.1109/EMBC.2018.8513491

[6] Korsun O.N., Tikhomirova T.A., Mikhaylov E.I. Visual-based monitoring of human operator flight tasks performance. Vestnik komp’yuternykh i informatsionnykh tekhnologiy [Herald of Computer and Information Technologies], 2019, no. 7, pp. 10--19 (in Russ.). DOI: https://doi.org/10.14489/vkit.2019.07.pp.010-019

[7] Polikanova I.S., Leonov S.V. Psychophysiological and molecular genetic correlates of fatigue. Sovremennaya zarubezhnaya psikhologiya [Modern Foreign Psychology], 2016, vol. 5, no. 4, pp. 24--35 (in Russ.).

[8] Korsun O.N., Mikhaylov E.I. Methods of electroencephalogram analysis for the human operator’s condition estimation during the piloting. Cloud of Science, 2018, vol. 5, no. 4, pp. 649--663 (in Russ.).

[9] Lin W., Li C., Sun S. Deep convolutional neural network for emotion recognition using EEG and peripheral physiological signal. In: Zhao Y., Kong X., Taubman D. (eds). Image and Graphics. ICIG 2017. Lecture Notes in Computer Science, vol. 10667. Cham, Springer, 2017, pp. 385--394. DOI: https://doi.org/10.1007/978-3-319-71589-6_33

[10] Ekman P., Friesen W.V. Unmasking the face: a guide to recognizing emotions from facial expressions. ISHK, 2003.

[11] Challenges in representation learning: facial expression recognition challenge. kaggle.com: website. Available at: http://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge (accessed: 15.12.2020).

[12] Metod Violy --- Dzhonsa kak osnova dlya raspoznavaniya lits [Viola --- Jones method as a basis for face recognition]. habr.com: website (in Russ.). Available at: https://habr.com/ru/post/133826 (accessed: 15.12.2020).

[13] Zhang K., Zhang Z., Li Z., et al. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters, 2016, vol. 23, no. 10, pp. 1499--1503. DOI: https://doi.org/10.1109/LSP.2016.2603342

[14] Uchebniki TensorFlow [TensorFlow tutorials]. tensorflow.org: website (in Russ.). Available at: https://www.tensorflow.org/tutorials (accessed: 15.12.2020).

[15] Krizhevsky A., Sutskever I., Hinton G.E. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems. Curran Associates, 2012, pp. 1097--1105.

[16] Korsun O.N., Yurko V.N., Mikhaylov E.I. Operator’s state estimation based on the face’s video images analysis using deep convolutional neural networks. IOP Conf. Ser.: Mater. Sci. Eng., 2020, vol. 714, art. 012012. DOI: https://doi.org/10.1088/1757-899X/714/1/012012

[17] Simonyan K., Zisserman A. Very deep convolutional networks for large-scale image recognition. In: ICLR, 2015.

[18] Simonyan K., Vedaldi A., Zisserman A. Deep inside convolutional networks: visualising image classification models and saliency maps. arxiv.org: website. Available at: https://arxiv.org/pdf/1312.6034.pdf (accessed: 15.12.2020).

[19] Selvaraju R.R., Das A., Vedantam R., et al. Grad-CAM: why did you say that? Visual explanations from deep networks via gradient-based localization. arxiv.org: website. Available at: https://arxiv.org/pdf/1610.02391v1.pdf (accessed: 15.12.2020).

[20] Yiu Y.-H., Aboulatta M., Raiser T., et al. DeepVOG: open-source pupil segmentation and gaze estimation in neuroscience using deep learning. J. Neurosci. Methods, 2019, vol. 324, art. 108307. DOI: https://doi.org/10.1016/j.jneumeth.2019.05.016

[21] Ronneberger O., Fischer P., Brox T. U-net: convolutional networks for biomedical image segmentation. arxiv.org: website. Available at: https://arxiv.org/pdf/1505.04597v1.pdf (accessed: 15.12.2020).

[22] Prudnikov L.A., Klimov R.S. Potential ability to manage professional operator training based on the assessment of their psychophysiological state. Sovremennoe obrazovanie [Modern Education], 2016, no. 2, pp. 52--64 (in Russ.). DOI: https://doi.org/10.7256/2409-8736.2016.2.17889