
How to Trick a Neural Network? Synthesising Noise to Reduce the Accuracy of Neural Network Image Classification

Authors: Karpenko A.P., Ovchinnikov Vad.A. Published: 29.03.2021
Published in issue: #1(134)/2021  
DOI: 10.18698/0236-3933-2021-1-102-119

 
Category: Informatics, Computer Engineering and Control | Chapter: System Analysis, Control, and Information Processing  
Keywords: deep neural network, image classification, attack noise synthesis, graphics accelerator

The study aims to develop an algorithm, and then software, for synthesising noise that can be used to attack deep neural networks designed to classify images. We present the results of our analysis of methods for conducting this type of attack. The synthesis of attack noise is stated as a problem of multidimensional constrained optimisation. The main features of the proposed attack noise synthesis algorithm are as follows: we employ the clip function to take the constraints on the noise into account; we use the top-1 and top-5 classification error rates as criteria of attack noise efficiency; we train the neural networks by backpropagation with the Adam gradient descent algorithm; we solve the optimisation problem stated above by stochastic gradient descent; network training also makes use of data augmentation. The software was developed in Python using the PyTorch framework, which dynamically differentiates the computation graph, and runs under Ubuntu 18.04 and CentOS 7; our IDE was Visual Studio Code. We accelerated the computations via CUDA on an NVIDIA Titan Xp GPU. The paper presents the results of a broad computational experiment in synthesising non-universal and universal attack noise for eight deep neural networks. We show that the proposed attack algorithm is able to increase the neural network error by a factor of eight.
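The clip-constrained gradient scheme described above can be illustrated with a minimal sketch. This is not the authors' PyTorch implementation: the signed-gradient update, the step size, the noise bound, and the user-supplied `loss_grad` callable are all assumptions made for the sake of a self-contained NumPy example.

```python
import numpy as np

def clip_noise(noise, eps):
    """Project the attack noise back into the box constraint |noise| <= eps."""
    return np.clip(noise, -eps, eps)

def synthesise_attack_noise(x, y_true, loss_grad, eps=0.1, lr=0.02, steps=50):
    """Gradient-based synthesis of (non-universal) attack noise for one input x.

    loss_grad(x, y) must return d(loss)/d(input) for the attacked classifier;
    in the paper this gradient comes from a deep network differentiated by
    PyTorch, here it is an arbitrary callable (an assumption of this sketch).
    """
    delta = np.zeros_like(x)
    for _ in range(steps):
        g = loss_grad(x + delta, y_true)
        # Ascend the classification loss, then clip to respect the constraint.
        delta = clip_noise(delta + lr * np.sign(g), eps)
    return delta
```

A universal variant of the same loop would accumulate the gradient over a whole batch of training images instead of a single `x`, keeping one shared `delta`; the clipping step is unchanged.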
