Development of the Naïve Algorithm of Support System for the Hate Speech Detection in Social Media platforms

Authors

  • Ehteshamuddin, Research Scholar, SHEAT College of Engineering, Varanasi, Uttar Pradesh, India
  • Sonam Singh, Assistant Professor, SHEAT College of Engineering, Varanasi, Uttar Pradesh, India

Keywords:

Support System, Hate Speech Detection, Social Media platforms

Abstract

While social media platforms provide a prominent space for users to engage in interpersonal conversation and express their viewpoints, the anonymity they afford can also enable users to disseminate hate speech and other objectionable content. Given the scale of these platforms, occurrences of hate speech need to be detected and flagged automatically. Although several approaches to hate speech detection exist, most of them are not designed to be interpretable or explainable. In this work, we address this lack of interpretability: we propose using Large Language Models (LLMs) to extract features, called rationales, from the input text and then training a simple hate speech classifier on those rationales, so that the resulting predictions are both accurate and easy to understand. Our system combines the linguistic comprehension of LLMs with the discriminative strength of state-of-the-art hate speech classifiers while keeping those classifiers faithfully interpretable. A thorough evaluation on several English-language social media hate speech datasets yields two key findings: (1) the LLM-extracted rationales are effective, and (2) detector performance is preserved despite training for interpretability.

Given the rapid, exponential growth of social media use, it is also crucial to examine social media content for hostile material beyond text alone. Over the last decade, researchers have studied how to distinguish content that encourages hate from content that does not. We propose a method that determines whether a speech utterance fosters hatred by using both audio and textual representations. Our technique builds on the Transformer architecture, integrating audio and text inputs through a new layer we dub "Attentive Fusion". Our approach outperforms prior state-of-the-art methods, reaching a macro F1 score of 0.927 on the test set.
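The rationale-based pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the `extract_rationales` helper, its prompt wording, and the TF-IDF/logistic-regression classifier are all assumptions made for the example.

```python
# Minimal sketch of the rationale-based pipeline described in the abstract.
# The LLM call and the classifier choice are illustrative assumptions,
# not the authors' exact implementation.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def extract_rationales(post: str) -> str:
    """Hypothetical helper: ask an LLM to return the phrases (rationales)
    in `post` that drive a hate/non-hate judgement.

    In practice this would prompt a hosted or local LLM, e.g.
    "List the exact phrases in the following post that make it hateful
    or benign: <post>". Here it simply returns the post itself so the
    sketch stays runnable without an LLM backend.
    """
    return post


# Toy training data; a real setup would use a labelled hate speech corpus.
posts = ["I hope your whole group disappears", "Lovely weather today"]
labels = [1, 0]  # 1 = hateful, 0 = benign

# Train a simple, interpretable classifier on the LLM-extracted rationales
# rather than on the raw posts.
rationales = [extract_rationales(p) for p in posts]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(rationales, labels)

print(clf.predict([extract_rationales("Have a nice day")]))
```

Because the classifier only ever sees the extracted rationales, its decisions can be traced back to the specific phrases the LLM surfaced, which is what makes the overall system interpretable.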
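The abstract also describes a Transformer-based model that fuses audio and text representations through an "Attentive Fusion" layer. The PyTorch sketch below shows one plausible reading of such a layer, cross-attention from text tokens over audio frames followed by a gated combination; the layer sizes, the gating mechanism, and the module name are assumptions for illustration only, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class AttentiveFusion(nn.Module):
    """Illustrative fusion of text and audio token embeddings.

    A guess at what an "Attentive Fusion" layer could look like
    (cross-attention plus a learned gate); the paper's layer may differ.
    """

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        # Text tokens attend over the audio frames.
        attended, _ = self.cross_attn(query=text, key=audio, value=audio)
        # A sigmoid gate controls how much audio evidence is mixed
        # into each text token.
        gate = torch.sigmoid(self.gate(torch.cat([text, attended], dim=-1)))
        return self.norm(text + gate * attended)


# Example: batch of 2 utterances, 20 text tokens, 50 audio frames, dim 256.
fusion = AttentiveFusion(dim=256, num_heads=4)
text = torch.randn(2, 20, 256)
audio = torch.randn(2, 50, 256)
fused = fusion(text, audio)                     # shape: (2, 20, 256)
logits = nn.Linear(256, 2)(fused.mean(dim=1))   # pooled hate/non-hate logits
print(logits.shape)                             # torch.Size([2, 2])
```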




Published

16-05-2024

Section

Research Articles

How to Cite

Development of the Naïve Algorithm of Support System for the Hate Speech Detection in Social Media platforms. (2024). International Journal of Scientific Research in Science and Technology, 11(3), 855-864. https://ijsrst.com/index.php/home/article/view/IJSRST2411361
