Sign Language Detection and Recognition Using Media Pipe and Deep Learning Algorithm

Authors

  • Ms. E J Honesty Praiselin, PG Student, Kings Engineering College, Sriperumbudhur, Tamil Nadu, India
  • Dr. G Manikandan, Professor, Kings Engineering College, Sriperumbudhur, Tamil Nadu, India
  • Ms. Vilma Veronica, Assistant Professor, Kings Engineering College, Sriperumbudhur, Tamil Nadu, India
  • Ms. S. Hemalatha, Assistant Professor, Kings Engineering College, Sriperumbudhur, Tamil Nadu, India

DOI:

https://doi.org/10.32628/IJSRST52411223

Keywords:

Sign Language, Image Recognition, Machine Learning, Feature Extraction

Abstract

People who lack the sense of hearing and the ability to speak face undeniable communication problems in their lives. People with hearing and speech impairments communicate using sign language, both among themselves and with others. These signs are made up of hand shapes and movements. Sign language is not known to the larger portion of the population that relies on spoken and written language for communication. It is therefore necessary to develop technological tools for the interpretation of sign language. Much research has been carried out on recognizing sign language using technology for most global languages, but there is still scope for developing tools and techniques for the sign languages of local dialects.
This work attempts to develop a technical approach for recognizing American Sign Language. Using machine learning techniques, it establishes a system for identifying hand gestures from American Sign Language. A combination of two-dimensional and three-dimensional images of Assamese gestures has been used to prepare a dataset. The MediaPipe framework has been implemented to detect landmarks in the images. The results reveal that the method implemented in this work is effective for recognizing the alphabets and gestures of sign language. The method could also be tried and tested for recognizing the signs and gestures of various other local languages of India.
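The pipeline the abstract describes detects hand landmarks with MediaPipe and then classifies them with a machine learning model. The sketch below illustrates only the intermediate feature-extraction step under stated assumptions: `landmarks_to_features` is a hypothetical helper (not the authors' implementation), and the 21 (x, y) points stand in for the landmark output of a detector such as MediaPipe Hands.

```python
def landmarks_to_features(landmarks):
    """Convert 21 (x, y) hand landmarks, as a detector such as
    MediaPipe Hands would produce, into a translation- and
    scale-invariant feature vector for a gesture classifier."""
    base_x, base_y = landmarks[0]  # wrist landmark used as the origin
    rel = [(x - base_x, y - base_y) for x, y in landmarks]
    # Normalize by the largest coordinate magnitude so hand size
    # and distance from the camera do not dominate the features.
    scale = max(max(abs(x), abs(y)) for x, y in rel) or 1.0
    return [v / scale for point in rel for v in point]

# Toy input: a fabricated "hand" with the wrist at (0.5, 0.5).
fake_hand = [(0.5 + 0.01 * i, 0.5 + 0.02 * i) for i in range(21)]
features = landmarks_to_features(fake_hand)
print(len(features))  # 42 values: one (x, y) pair per landmark
```

A vector like this could then be fed to any standard classifier; the normalization is one common design choice for making gestures comparable across frames, not a detail taken from the paper.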




Published

03-04-2024

Issue

Section

Research Articles

How to Cite

Sign Language Detection and Recognition Using Media Pipe and Deep Learning Algorithm. (2024). International Journal of Scientific Research in Science and Technology, 11(2), 123-130. https://doi.org/10.32628/IJSRST52411223
