HateModerate: Testing Hate Speech Detectors against Content Moderation Policies
arXiv (2023)
Abstract
To protect users from massive amounts of hateful content, prior work has studied automated hate speech detection. Despite these efforts, one question remains: do automated hate speech detectors conform to social media content policies? A platform's content policies are a checklist of the content moderated by that platform. Because content moderation rules are often uniquely defined, existing hate speech datasets cannot directly answer this question. This work seeks to answer it by creating HateModerate, a dataset for testing the behaviors of automated content moderators against content policies. First, we engage 28 annotators and GPT in a six-step annotation process, producing hateful and non-hateful test suites matching each of Facebook's 41 hate speech policies. Second, we test state-of-the-art hate speech detectors against HateModerate, revealing substantial failures in these models' conformity to the policies. Third, using HateModerate, we augment the training data of a top-downloaded hate speech detector on HuggingFace. We observe significant improvement in the model's conformity to content policies while maintaining comparable scores on the original test data. Our dataset and code can be found in the attachment.