How well do hate speech, toxicity, abusive and offensive language classification models generalize across datasets?

Information Processing & Management (2021)

Abstract
A considerable body of research deals with the automatic identification of hate speech and related phenomena. However, cross-dataset model generalization remains a challenge. In this context, we address two central questions that remain open: (i) to what extent does generalization depend on the model and on the composition and annotation of the training data in terms of different categories, and (ii) do specific features of the datasets or models influence the generalization potential? To answer (i), we experiment with BERT, ALBERT, fastText, and SVM models trained on nine common public English datasets, whose class (or category) labels are standardized (and thus made comparable), in intra- and cross-dataset setups. The experiments show that generalization indeed varies from model to model and that some categories (e.g., ‘toxic’, ‘abusive’, or ‘offensive’) serve better as cross-dataset training categories than others (e.g., ‘hate speech’). To answer (ii), we use a Random Forest model to assess the relevance of different model and dataset features when predicting the performance of 450 BERT, 450 ALBERT, 450 fastText, and 348 SVM binary abusive language classifiers (1698 in total). We find that, in order to generalize well, a model already needs to perform well in an intra-dataset scenario. Furthermore, we find that some other parameters are equally decisive for the success of the generalization, including, e.g., the training and target categories and the percentage of out-of-domain vocabulary.
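To make the intra- vs. cross-dataset distinction concrete, the sketch below shows a minimal evaluation loop with a TF-IDF + linear SVM baseline from scikit-learn. This is not the authors' code or data pipeline: the file names dataset_a.csv and dataset_b.csv, the text/label column names, and the binary label convention are hypothetical placeholders standing in for two of the standardized datasets described in the abstract.

```python
# Minimal sketch (not the authors' implementation) of training a binary
# abusive-language classifier on one dataset and scoring it both on a
# held-out split of the same dataset (intra-dataset) and on a different
# dataset (cross-dataset).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score


def load(path):
    """Load a dataset with hypothetical 'text' and binary 'label' columns
    (1 = abusive/offensive/toxic, 0 = other)."""
    df = pd.read_csv(path)
    return df["text"].tolist(), df["label"].tolist()


# Two standardized datasets (hypothetical file names).
X_a, y_a = load("dataset_a.csv")   # training dataset
X_b, y_b = load("dataset_b.csv")   # unseen target dataset

# Intra-dataset setup: train and test on splits of the same dataset.
X_tr, X_te, y_tr, y_te = train_test_split(X_a, y_a, test_size=0.2, random_state=0)
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2), LinearSVC())
model.fit(X_tr, y_tr)
print("intra-dataset macro-F1:", f1_score(y_te, model.predict(X_te), average="macro"))

# Cross-dataset setup: evaluate the same trained model on the other dataset.
print("cross-dataset macro-F1:", f1_score(y_b, model.predict(X_b), average="macro"))
```

Repeating such a loop over all dataset pairs, categories, and model types is what yields the pool of classifier scores that the paper's Random Forest meta-model then analyzes for feature relevance.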
Keywords
00-01,99-00