
Camera Trap Data Analysis System for Operational Remote Monitoring of the Natural Areas

Authors: Efremov Vl.A., Leus A.V., Gavrilov D.A., Mangazeev D.I., Kholodnyak I.V., Radysh A.S., Zuev V.A., Vodichev N.A., Parshikov M.M.
Published: 22.01.2024
Published in issue: #4(145)/2023  
DOI: 10.18698/0236-3933-2023-4-85-109

 
Category: Informatics, Computer Engineering and Control | Chapter: System Analysis, Control, and Information Processing  
Keywords: camera trap images, agglomerative clustering, deep convolutional neural networks, detection, classification, two-stage approach, registrations

Abstract

The paper presents a camera trap data analysis system for operational remote monitoring of natural areas, built on a two-stage neural network image processing pipeline and comprising a server component and a user component. The server component is designed to process large volumes of data received from different reserves and to train the neural network algorithms. The user component is installed on a local computer at the reserve. The developed system significantly reduces the time needed to process camera trap data and simplifies ecological analysis. The ability to retrain the classifier for the species diversity of any particular reserve, without retraining the detector, improves recognition quality of animal species within a single specially protected natural area and makes the system more flexible and scalable. To refine the algorithm's quantitative and qualitative predictions, the software is supplemented with functionality that automatically creates so-called registrations. Registrations are used to count the number of objects in each photograph while taking into account contextual information from the image sequence; they allow the neural network predictions to be adjusted not only in terms of the number of animals in a photo but also in terms of the predicted classes. Processing speeds were compared on various hardware platforms. It is shown that the use of modern graphics processors makes it possible to process images at a rate significantly exceeding human capabilities.
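
The abstract describes two ideas that a short sketch can make concrete: a two-stage pipeline in which a detector finds animals and a separate classifier assigns species to each crop, and the grouping of consecutive images into registrations whose aggregated counts and classes adjust the per-photo predictions. The Python sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the five-minute time-gap threshold, and the max-count/majority-vote aggregation rule are all hypothetical, and the simple single-linkage grouping by capture time stands in for the agglomerative clustering mentioned in the keywords.

```python
# Minimal illustrative sketch of the two ideas from the abstract: a two-stage
# detector -> classifier pipeline and grouping of image sequences into
# "registrations". All names, thresholds, and the aggregation rule below are
# assumptions for illustration, not the authors' implementation.
from dataclasses import dataclass
from datetime import datetime, timedelta
from collections import Counter
from typing import List, Tuple


@dataclass
class Photo:
    path: str
    timestamp: datetime              # capture time read from EXIF or the file name


@dataclass
class Detection:
    box: Tuple[int, int, int, int]   # (x, y, w, h) in pixels
    species: str                     # label assigned by the second-stage classifier
    score: float


def detect_animals(photo: Photo) -> List[Tuple[int, int, int, int]]:
    """Stage 1 placeholder: a real system would run an object detector here."""
    return [(0, 0, 100, 100)]        # dummy output: pretend one animal was found


def classify_crop(photo: Photo, box: Tuple[int, int, int, int]) -> Tuple[str, float]:
    """Stage 2 placeholder: a real system would classify the cropped detection."""
    return ("unknown", 0.0)          # dummy label and confidence


def process_photo(photo: Photo) -> List[Detection]:
    """Two-stage inference: detect first, then classify each detected crop."""
    return [Detection(box, *classify_crop(photo, box)) for box in detect_animals(photo)]


def build_registrations(photos: List[Photo],
                        max_gap: timedelta = timedelta(minutes=5)) -> List[List[Photo]]:
    """Group consecutive photos into registrations by capture time.

    Single-linkage grouping on timestamps: a new registration starts whenever
    the gap to the previous photo exceeds max_gap (the threshold is illustrative).
    """
    photos = sorted(photos, key=lambda p: p.timestamp)
    groups: List[List[Photo]] = []
    current: List[Photo] = []
    for photo in photos:
        if current and photo.timestamp - current[-1].timestamp > max_gap:
            groups.append(current)
            current = []
        current.append(photo)
    if current:
        groups.append(current)
    return groups


def summarize_registration(photos: List[Photo]) -> dict:
    """Adjust per-photo predictions using the whole sequence: take the maximum
    per-frame count and a majority vote over predicted classes (one plausible
    aggregation rule, assumed here for illustration)."""
    per_photo = [process_photo(p) for p in photos]
    count = max((len(dets) for dets in per_photo), default=0)
    votes = Counter(d.species for dets in per_photo for d in dets)
    species = votes.most_common(1)[0][0] if votes else "empty"
    return {"photos": len(photos), "animal_count": count, "species": species}
```

In the real system, a detector trained on camera trap data and a per-reserve classifier would replace the placeholder functions, and the grouping threshold and aggregation rule would be tuned to the reserve's data; the sketch only shows how sequence-level registrations can override noisy single-frame counts and labels.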

Please cite this article in English as:

Efremov V.A., Leus A.V., Gavrilov D.A., et al. Camera trap data analysis system for operational remote monitoring of the natural areas. Herald of the Bauman Moscow State Technical University, Series Instrument Engineering, 2023, no. 4 (145), pp. 85--109 (in Russ.). DOI: https://doi.org/10.18698/0236-3933-2023-4-85-109
