Smart Agriculture: Enhancing Security Through Animal Detection Via Deep Learning and Computer Vision

Authors

  • A. Samuvel, PG Student, Kings Engineering College, Sriperumbudur, Tamil Nadu, India
  • Dr. G. Manikandan, Professor, Kings Engineering College, Sriperumbudur, Tamil Nadu, India
  • Ms. Vilma Veronica, Assistant Professor, Kings Engineering College, Sriperumbudur, Tamil Nadu, India
  • Ms. S. Hemalatha, Assistant Professor, Kings Engineering College, Sriperumbudur, Tamil Nadu, India

DOI:

https://doi.org/10.32628/IJSRST52411226

Keywords:

Animal Detection, YOLOv6, Convolutional Neural Network, Video Surveillance, Wild Animal Monitoring, Alert Message Generation

Abstract

Agriculture is a crucial sector, contributing significantly to the economies of many countries. Nevertheless, it faces persistent challenges, one of which is animal intrusion: wild animals entering fields pose a considerable threat to crops and cause financial losses for farmers. In response, we have developed a YOLOv6-based animal-intrusion warning system for agricultural settings.
The system analyzes live video feeds from strategically positioned cameras that continuously monitor the farm and its surroundings. A deep learning model, trained to detect and classify various animals in real time, enables early identification of potential threats to crops, while computer vision techniques support accurate tracking and prediction of animal movements within the monitored areas. Upon detection, the system triggers timely alerts, giving farmers the information needed to take swift and appropriate action and notifying forest officials; alarm systems serve as an additional deterrent.
This paper outlines the libraries, frameworks, and convolutional neural networks employed in developing the animal detection model, shedding light on the technical aspects of its implementation. More broadly, the integration of deep learning and computer vision in agriculture not only enhances crop protection but also contributes to the sustainable and efficient management of farming practices, and this research opens avenues for further exploration at the intersection of technology and agriculture.
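The paper does not publish its alert-generation logic; the following is a minimal, hypothetical sketch of the post-detection step described in the abstract, where per-frame model outputs are filtered and turned into alert messages. The `Detection` structure, the `AlertManager` class, the threat-class list, and the confidence/cooldown thresholds are all illustrative assumptions, and the YOLOv6 inference itself is abstracted away behind the list of detections passed in.

```python
import time
from dataclasses import dataclass


@dataclass
class Detection:
    """One model output for a single frame (hypothetical structure)."""
    label: str          # predicted animal class, e.g. "boar"
    confidence: float   # model confidence in [0, 1]
    box: tuple          # bounding box (x, y, w, h) in pixels


# Illustrative choices, not values from the paper:
ALERT_CLASSES = {"elephant", "boar", "deer"}  # animals treated as crop threats
CONF_THRESHOLD = 0.5                          # ignore low-confidence detections
COOLDOWN_S = 60.0                             # suppress repeat alerts per class


class AlertManager:
    """Turns raw per-frame detections into rate-limited alert messages."""

    def __init__(self, cooldown_s=COOLDOWN_S, now=time.monotonic):
        self.cooldown_s = cooldown_s
        self.now = now              # injectable clock (eases testing)
        self._last_alert = {}       # label -> timestamp of last alert sent

    def process(self, detections):
        """Return the alert messages to send for this frame's detections."""
        alerts = []
        for det in detections:
            if det.label not in ALERT_CLASSES or det.confidence < CONF_THRESHOLD:
                continue
            t = self.now()
            last = self._last_alert.get(det.label)
            if last is None or t - last >= self.cooldown_s:
                self._last_alert[det.label] = t
                alerts.append(f"ALERT: {det.label} detected "
                              f"(confidence {det.confidence:.2f}) at {det.box}")
        return alerts
```

In a deployment, `process` would be called once per frame with the model's detections, and each returned message would be forwarded to the farmer (and, as the abstract notes, to forest officials) over SMS or a push service; the per-class cooldown keeps one lingering animal from flooding the channel.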



Published

03-04-2024

Issue

Section

Research Articles

How to Cite

Smart Agriculture: Enhancing Security Through Animal Detection Via Deep Learning and Computer Vision. (2024). International Journal of Scientific Research in Science and Technology, 11(2), 140-159. https://doi.org/10.32628/IJSRST52411226
