“In situ info.”: A Prototype for Use on a Small Unmanned Aerial Vehicle (UAV)
Abstract
Artificial intelligence (AI) is now used to analyze images from cameras mounted on unmanned aerial vehicles (UAVs), including for delineating addictive crops in highland fields, valleys, and wilderness areas. Because such images can be processed in near real time, the desired targets can be identified more clearly. This paper presents our current study of an AI system installed on a small UAV that detects addictive plants by processing and analyzing camera images immediately after acquisition. All processing runs on a device mounted on the UAV, which we named “In situ info.” The work was divided into three parts: integration of the basic AI system, an AI prototype for detecting addictive plants on a small UAV, and testing of that prototype on a small UAV in real-world settings. This article describes the initial integration of the AI system. Because the data are analyzed on board a small UAV, the equipment had to be small and lightweight, which constrained the hardware that could be installed. In the system studied, a readily available flower served as a stand-in for an addictive plant, and a “one-class classification” model was trained in TensorFlow version 1.15 using the SSD MobileNet version 2 architecture to process the images. The model was then tested with the camera systems intended for installation on the small UAV, using both prime-lens and zoom-lens cameras. With the AI system working across these camera types, the results were in accordance with the study objectives.
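As a rough illustration of the one-class detection pipeline described above, the sketch below shows the post-processing step that a single-class SSD detector typically requires: keeping only boxes whose confidence for the target-plant class exceeds a threshold. This is not the authors' code; the function name, threshold value, and example outputs are hypothetical, standing in for the raw outputs of an SSD MobileNet v2 inference pass in TensorFlow 1.15.

```python
def filter_detections(boxes, scores, score_threshold=0.5):
    """Keep only detections whose confidence exceeds the threshold.

    boxes:  list of [ymin, xmin, ymax, xmax] boxes in normalized
            image coordinates, as emitted by an SSD-style detector.
    scores: per-box confidence for the single "target plant" class.
    """
    return [
        (box, score)
        for box, score in zip(boxes, scores)
        if score >= score_threshold
    ]


# Hypothetical raw outputs from one inference pass over a UAV frame.
boxes = [[0.10, 0.10, 0.30, 0.30], [0.50, 0.50, 0.90, 0.90]]
scores = [0.92, 0.18]

kept = filter_detections(boxes, scores)
# Only the first box clears the 0.5 threshold.
```

On an embedded device this thresholding runs per frame immediately after the network forward pass, so only confident detections are stored or transmitted, which suits the small, lightweight on-board hardware described in the abstract.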
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.