Major Nocturnal Pest Classification Model Using the Faster R-CNN Deep Learning Architecture
DOI: https://doi.org/10.32628/IJSRST2196174

Keywords: Convolutional Neural Network, CNN, Deep Learning, Pest Detection, Faster R-CNN

Abstract
Agricultural research improves the quality and quantity of crops, but pests degrade them. Pesticides are applied to keep these pests from reproducing; however, excessive pesticide use is extremely detrimental to both production and the environment. Early pest detection is therefore required. We analyzed the most frequently used methodologies to determine the most appropriate technique for the initial diagnosis and early detection of significant nocturnal flying pests such as White Grub, Helicoverpa, and Spodoptera. We identified and evaluated three widely used deep learning meta-architectures for object detection (Faster R-CNN, SSD Inception, and SSD MobileNet) on a small pest dataset; the Faster R-CNN meta-architecture outperformed the others. To address the issue of class imbalance, we applied image augmentation with the Faster R-CNN meta-architecture. The proposed work demonstrates how to classify nocturnal pests using Faster R-CNN with better accuracy on a limited dataset, and how the classified results can serve as a decision-making tool.
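The abstract notes that image augmentation was used to offset class imbalance in the small pest dataset. As an illustration only (the paper does not publish its augmentation code), a minimal NumPy sketch of oversampling minority classes with flipped, rotated, and brightness-jittered copies might look like the following; the function names, the jitter range, and the oversample-to-largest-class policy are assumptions, not the authors' implementation:

```python
import numpy as np

def augment(image, rng):
    """Generate simple augmented variants of one image:
    flips, 90-degree rotations, and a brightness jitter."""
    variants = [
        np.fliplr(image),      # horizontal flip
        np.flipud(image),      # vertical flip
        np.rot90(image, k=1),  # rotate 90 degrees
        np.rot90(image, k=3),  # rotate 270 degrees
    ]
    # photometric variant: shift brightness by a random offset, clipped to [0, 255]
    jitter = np.clip(image.astype(np.int16) + rng.integers(-20, 21), 0, 255)
    variants.append(jitter.astype(image.dtype))
    return variants

def balance_classes(images_by_class, rng):
    """Oversample each minority class with augmented copies until
    every class matches the size of the largest class."""
    target = max(len(imgs) for imgs in images_by_class.values())
    balanced = {}
    for label, imgs in images_by_class.items():
        pool = list(imgs)
        while len(pool) < target:
            src = imgs[rng.integers(len(imgs))]       # pick an original image
            pool.append(augment(src, rng)[rng.integers(5)])  # add one random variant
        balanced[label] = pool[:target]
    return balanced
```

For example, a dataset with 3 White Grub and 7 Helicoverpa images would come out of `balance_classes` with 7 images per class, the extra White Grub samples being augmented copies.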
References
- K. K. Shukla, D. J. Patel, and B. L. Radadiya, “Role of Information Technology in Improvement of Current Scenario in Agriculture,” Peer Rev. Res. J. Publ. By, vol. 7, no. 3, pp. 390–395, 2014.
- U. K. Behera, S. K. Rautaray, A. K. Choudhary, and D. S. Rana, “Integrated farming system research in India: an overview,” Div. Agron., no. January 2013, pp. 40–78, 2013.
- D. J. Patel and N. Bhatt, “Analytical Review of Major Nocturnal Pests’ Detection Technique using Computer Vision,” Orient. J. Comput. Sci. Technol., vol. 11, no. 3, pp. 1–4, 2018, doi: 10.13005/ojcst11.03.06.
- R. Arora and S. Sandhu, “Breeding Insect Resistant Crops for Sustainable Agriculture,” Breed. Insect Resist. Crop. Sustain. Agric., pp. 1–421, 2017, doi: 10.1007/978-981-10-6056-4.
- A. Rastogi, R. Arora, and S. Sharma, “Leaf disease detection and grading using computer vision technology & fuzzy logic,” in 2nd International Conference on Signal Processing and Integrated Networks, SPIN 2015, 2015, pp. 500–505, doi: 10.1109/SPIN.2015.7095350.
- O. Russakovsky et al., “ImageNet Large Scale Visual Recognition Challenge,” Int. J. Comput. Vis., vol. 115, no. 3, pp. 211–252, 2015, doi: 10.1007/s11263-015-0816-y.
- D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, Computer Vision – ECCV 2014 - Part VII, vol. 8695, no. Chapter 11. 2014.
- B. D. J. Hand and N. M. Adams, “Data mining,” Pc Ai, vol. 15, no. 6, p. 11, 2001, doi: 10.1002/9781118445112.stat06466.pub2.
- N. K. Manaswi, “Deep Learning with Applications Using Python,” pp. 97–104, 2018, doi: 10.1007/978-1-4842-3516-4.
- A. Kamilaris and F. X. Prenafeta-Boldú, “A review of the use of convolutional neural networks in agriculture,” J. Agric. Sci., vol. 156, no. 3, pp. 1–11, 2018, doi: 10.1017/S0021859618000436.
- C. Papageorgiou and T. Poggio, “A Trainable System for Object Detection,” Int. J. Comput. Vis., vol. 38, no. 1, pp. 15–33, 2000, doi: 10.1023/A:1008162616689.
- H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-Up Robust Features (SURF),” Comput. Vis. Image Underst., vol. 110, no. 3, pp. 346–359, 2008, doi: 10.1016/j.cviu.2007.09.014.
- Z. Cai, Q. Fan, R. S. Feris, and N. Vasconcelos, “A Unified Multi-scale Deep Convolutional Neural Network for Fast Object Detection,” in Computer Vision – ECCV 2016. Lecture Notes in Computer Science, 2016, vol. 9908, pp. 354–370, doi: 10.1007/978-3-319-46493-0.
- X. Chen, S. Xiang, C. L. Liu, and C. H. Pan, “Vehicle detection in satellite images by hybrid deep convolutional neural networks,” IEEE Geosci. Remote Sens. Lett., vol. 11, no. 10, pp. 1797–1801, 2014, doi: 10.1109/LGRS.2014.2309695.
- M. Eisenbach, D. Seichter, T. Wengefeld, and H. M. Gross, “Cooperative multi-scale Convolutional Neural Networks for person detection,” Proc. Int. Jt. Conf. Neural Networks, vol. 2016-Octob, pp. 267–276, 2016, doi: 10.1109/IJCNN.2016.7727208.
- S. Cui, P. Ling, H. Zhu, and H. M. Keener, “Plant pest detection using an artificial nose system: A review,” Sensors (Switzerland), vol. 18, no. 2, pp. 1–18, 2018, doi: 10.3390/s18020378.
- P. Boissard, V. Martin, and S. Moisan, “A cognitive vision approach to early pest detection in greenhouse crops,” Comput. Electron. Agric., vol. 62, no. 2, pp. 81–93, 2008, doi: 10.1016/j.compag.2007.11.009.
- L. Liu et al., “PestNet: An End-to-End Deep Learning Approach for Large-Scale Multi-Class Pest Detection and Classification,” IEEE Access, vol. 7, pp. 45301–45312, 2019, doi: 10.1109/ACCESS.2019.2909522.
- D. Moshou, C. Bravo, J. West, S. Wahlen, A. McCartney, and H. Ramon, “Automatic detection of ‘yellow rust’ in wheat using reflectance measurements and neural networks,” Comput. Electron. Agric., vol. 44, no. 3, pp. 173–188, 2004, doi: 10.1016/j.compag.2004.04.003.
- S. Coulibaly, B. Kamsu-foguem, and D. Kamissoko, “Computers in Industry Deep neural networks with transfer learning in millet crop images,” Comput. Ind., vol. 108, pp. 115–120, 2019, doi: 10.1016/j.compind.2019.02.003.
- M. Yazdi and T. Bouwmans, “New trends on moving object detection in video images captured by a moving camera: A survey,” Comput. Sci. Rev., vol. 28, pp. 157–177, 2018, doi: 10.1016/j.cosrev.2018.03.001.
- Z. Q. Zhao, P. Zheng, S. T. Xu, and X. Wu, “Object Detection With Deep Learning: A Review,” IEEE Trans. Neural Networks Learn. Syst., vol. 30, no. 11, pp. 3212–3232, 2019, doi: 10.1109/TNNLS.2018.2876865.
- E. Karami, S. Prasad, and M. Shehata, “Image Matching Using SIFT, SURF, BRIEF and ORB: Performance Comparison for Distorted Images,” in Newfoundland Electrical and Computer Engineering Conference, Nov. 2015, p. 4, doi: 10.13140/RG.2.1.1558.3762.
- D. J. Patel and K. K. Shukla, “Challenges and Opportunities for ICT Initiatives in Agricultural Marketing in India,” Orient. J. Comput. Sci. Technol., vol. 7, no. 3, pp. 377–381, 2014.
- Z. Wang, K. Liu, J. Li, Y. Zhu, and Y. Zhang, “Various Frameworks and Libraries of Machine Learning and Deep Learning: A Survey,” Arch. Comput. Methods Eng., 2019, doi: 10.1007/s11831-018-09312-w.
- D. H. Hubel and T. N. Wiesel, “Receptive fields of single neurones in the cat’s striate cortex,” J. Physiol., vol. 148, no. 3, pp. 574–591, 1959. [Online]. Available: http://doi.wiley.com/10.1113/jphysiol.2009.174151.
- C. C. Aggarwal, Neural Networks and Deep Learning. 2018.
- P. Agrawal, R. Girshick, and J. Malik, “Analyzing the performance of multilayer neural networks for object recognition,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 8695 LNCS, no. PART 7, pp. 329–344, 2014, doi: 10.1007/978-3-319-10584-0_22.
- K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” 3rd Int. Conf. Learn. Represent. ICLR 2015 - Conf. Track Proc., pp. 1–14, 2015.
- S. Goswami, “A deeper look at how Faster-RCNN works,” medium.com, 2018. https://whatdhack.medium.com/a-deeper-look-at-how-faster-rcnn-works-84081284e1cd.
- C. Ning, H. Zhou, Y. Song, and J. Tang, “Inception Single Shot MultiBox Detector for object detection,” in IEEE International Conference on Multimedia and Expo Workshops (ICMEW), 2017, no. July, pp. 549–554.
- G. Howard et al., “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” 2017. [Online]. Available: http://arxiv.org/abs/1704.04861.
- Y. C. Chiu, C. Y. Tsai, M. Da Ruan, G. Y. Shen, and T. T. Lee, “Mobilenet-SSDv2: An Improved Object Detection Model for Embedded Systems,” in 2020 International Conference on System Science and Engineering, ICSSE 2020, 2020, pp. 0–4, doi: 10.1109/ICSSE50014.2020.9219319.
- J. V. Dillon et al., “TensorFlow Distributions,” 2017. [Online]. Available: http://arxiv.org/abs/1711.10604.
- Keras Team, “About Keras,” keras.io, 2015. https://keras.io/about/.
- Tzutalin, “LabelImg,” GitHub, 2015. https://github.com/tzutalin/labelImg.
- TensorFlow, “TensorFlow Tutorials,” tensorflow.org, 2017. https://www.tensorflow.org/tutorials.
- Morgan, “TensorFlow: How to freeze a model and serve it with a python API,” blog.metaflow.fr, 2016. https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc.
- J. L. Leevy, T. M. Khoshgoftaar, R. A. Bauder, and N. Seliya, “A survey on addressing high-class imbalance in big data,” J. Big Data, vol. 5, no. 1, 2018, doi: 10.1186/s40537-018-0151-6.
- C. Shorten and T. M. Khoshgoftaar, “A survey on Image Data Augmentation for Deep Learning,” J. Big Data, vol. 6, no. 1, 2019, doi: 10.1186/s40537-019-0197-0.
- D. J. Patel and N. Bhatt, “Pest Identification Among Deep Learning’s Meta-architectures Using TensorFlow,” Int. J. Eng. Adv. Technol., vol. 9, no. 1, pp. 1910–1914, 2019, doi: 10.35940/ijeat.A1031.109119.
- K. G. Bapatla, S. K. Singh, V. Sengottaiyan, B. Singh, and N. P. Singh, “Seasonal patterns of insect pests in major pigeonpea and chickpea growing agro-climatic zones of India and their management inferences,” Int. J. Trop. Insect Sci., vol. 41, no. 2, pp. 1601–1609, 2021, doi: 10.1007/s42690-020-00361-y.
- D. Patel and N. Bhatt, “Improved accuracy of pest detection using augmentation approach with Faster R-CNN,” in IOP Conference Series: Materials Science and Engineering, 2021, vol. 1042, no. 1, p. 012020, doi: 10.1088/1757-899x/1042/1/012020.
License
Copyright (c) IJSRST

This work is licensed under a Creative Commons Attribution 4.0 International License.