A transfer learning approach for detecting offensive and hate speech on social media platforms

MULTIMEDIA TOOLS AND APPLICATIONS (2023)

Abstract
Over the last few decades, the expansion of technology and the internet has led to a proliferation of social media users and a simultaneous increase in hate speech. Hate speech is a critical concern not only because it ignites violence and spreads hatred, but also because detecting it requires considerable computing resources and content monitoring by human experts and algorithms. Although this is an active research area and several artificial intelligence techniques have been proposed to address the problem, the petabytes of content now being generated call for methods with improved performance and reduced model development time. We propose a transfer learning approach for detecting hate and offensive speech on social media that deploys a pre-trained model for data analysis, thereby promoting model reusability. We propose two transfer learning models, Google's Word2vec with LSTM and GloVe with LSTM, and compare their performance against unigram and bigram language models for Naive Bayes (NB), Decision Trees (DT), and Support Vector Machines (SVM), which serve as the baseline algorithms in our analysis. The performance of the proposed models in classifying hate speech, offensive speech, and neutral speech is validated using precision, recall, F1-score, and support, and the overall performance across multiple datasets is evaluated with respect to accuracy. In-depth experimental analysis shows that the proposed models are significantly more robust for detecting hateful and offensive speech and outperform the considered baseline algorithms.
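As a rough illustration of the transfer-learning setup described in the abstract, the sketch below wires pre-trained GloVe vectors into a frozen embedding layer that feeds an LSTM with a three-way softmax (hate, offensive, neutral). This is not the authors' released code; the GloVe file path, vocabulary size, sequence length, and hyperparameters are assumptions chosen only for illustration.

```python
# Minimal sketch of a GloVe + LSTM classifier, assuming a local copy of
# "glove.6B.100d.txt" and a three-class label scheme (hate / offensive / neutral).
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

MAX_WORDS, MAX_LEN, EMB_DIM = 20000, 100, 100  # illustrative hyperparameters

def load_glove(path, word_index, dim=EMB_DIM):
    """Build an embedding matrix from pre-trained GloVe vectors."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype="float32")
    matrix = np.zeros((MAX_WORDS, dim))
    for word, i in word_index.items():
        if i < MAX_WORDS and word in vectors:
            matrix[i] = vectors[word]
    return matrix

def build_model(embedding_matrix):
    """Frozen pre-trained embeddings (transfer learning) feeding an LSTM."""
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(
            MAX_WORDS, EMB_DIM, weights=[embedding_matrix],
            input_length=MAX_LEN, trainable=False),   # reuse pre-trained vectors
        tf.keras.layers.LSTM(128, dropout=0.2),
        tf.keras.layers.Dense(3, activation="softmax"),  # hate / offensive / neutral
    ])

# Usage sketch (texts and integer labels come from the social-media dataset):
# tok = Tokenizer(num_words=MAX_WORDS); tok.fit_on_texts(texts)
# X = pad_sequences(tok.texts_to_sequences(texts), maxlen=MAX_LEN)
# model = build_model(load_glove("glove.6B.100d.txt", tok.word_index))
# model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
#               metrics=["accuracy"])
# model.fit(X, labels, validation_split=0.1, epochs=5)
```

The Word2vec variant follows the same pattern, with the embedding matrix built from pre-trained Word2vec vectors instead of GloVe.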
Keywords
Hate speech, Transfer learning, Word2vec model, GloVe model, LSTM