Application of Transformer-Based Language Models to Detect Hate Speech in Social Media

Authors

  • Swapnanil Mukherjee, Department of Computer Science, Ashoka University, India
  • Sujit Das, Department of Computer Science and Engineering, National Institute of Technology Warangal, India

DOI:

https://doi.org/10.47852/bonviewJCCE2022010102

Keywords:

transformer, hate speech, social media, RoBERTa, XLNet, fine-tuning, natural language processing

Abstract

Detecting and removing hateful speech on online social media platforms is a challenging task. Researchers have attempted to solve this problem with both classical machine learning and deep learning methods, but these approaches have limitations: they require extensive hand-crafted features, careful model architecture design, and pretrained embeddings that are not proficient at capturing the semantic relations between words. In this paper, we therefore tackle the problem using Transformer-based pretrained language models, which are specifically designed to produce contextual embeddings of text sequences. We evaluate two such models, RoBERTa and XLNet, on four publicly available datasets from different social media platforms and compare them to the existing baselines. Our investigation shows that the Transformer-based models match or surpass, often by significant margins, the baseline scores obtained by previously used models such as the 1-dimensional convolutional neural network (1D-CNN) and long short-term memory (LSTM) network. The Transformer-based models also proved more robust, retaining performance close to native when trained on one dataset and tested on another. Our investigation further revealed that variations in the characteristics of the data produce significantly different results with the same model. From these experimental observations, we establish that Transformer-based language models outperform their conventional counterparts at a fraction of the computational cost and with minimal need for complex model engineering.
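The approach described in the abstract, fine-tuning a pretrained Transformer with a classification head, can be sketched as follows. This is a minimal illustration, not the authors' actual code: it instantiates a tiny randomly initialized RoBERTa via a toy `RobertaConfig` so it runs without downloading weights, whereas the paper's experiments would load pretrained checkpoints (e.g. `roberta-base`) and real tokenized posts. All hyperparameters and shapes below are assumptions for illustration.

```python
import torch
from transformers import RobertaConfig, RobertaForSequenceClassification

# Toy configuration so the sketch runs offline; real experiments would use
# RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2).
config = RobertaConfig(
    vocab_size=1000, hidden_size=64, num_hidden_layers=2,
    num_attention_heads=2, intermediate_size=128,
    max_position_embeddings=130, num_labels=2,  # binary: hate / not hate
)
model = RobertaForSequenceClassification(config)

# Dummy mini-batch: token ids for 4 "posts" of 16 tokens each, with labels.
input_ids = torch.randint(0, 1000, (4, 16))
attention_mask = torch.ones_like(input_ids)
labels = torch.tensor([0, 1, 0, 1])

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One fine-tuning step: the encoder and the classification head are updated
# jointly, which is essentially all the model engineering required.
out = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
out.loss.backward()
optimizer.step()

print(out.logits.shape)  # one score per class for each post: (4, 2)
```

The same loop applies to XLNet by swapping in the corresponding config and model classes; the contextual embeddings come from the pretrained encoder, so no hand-crafted features are needed.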

 

Received: 28 August 2021 | Revised: 20 October 2021 | Accepted: 29 October 2021

 

Conflicts of Interest

The authors declare that they have no conflicts of interest in this work.


Published

2021-12-17

Section

Research Articles

How to Cite

Mukherjee, S., & Das, S. (2021). Application of Transformer-Based Language Models to Detect Hate Speech in Social Media. Journal of Computational and Cognitive Engineering, 2(4), 278-286. https://doi.org/10.47852/bonviewJCCE2022010102