Operationalizing content moderation "accuracy" in the Digital Services Act
CoRR (2023)
Abstract
The Digital Services Act, recently adopted by the EU, requires social media
platforms to report the "accuracy" of their automated content moderation
systems. This colloquial term is vague, or open-textured: literal accuracy
(the number of correct predictions divided by the total) is ill-suited to
problems with large class imbalance, and the ground truth and dataset against
which accuracy is measured are unspecified. Without further specification, the
regulatory requirement allows for deficient reporting. In this
interdisciplinary work, we operationalize "accuracy" reporting by refining
legal concepts and relating them to technical implementation. We start by
elucidating the legislative purpose of the Act to legally justify an
interpretation of "accuracy" as precision and recall. These metrics remain
informative in class-imbalanced settings and reflect the proportional
balancing of Fundamental Rights of the EU Charter. We then focus on the
estimation of recall, as its naive estimation can incur extremely high
annotation costs and disproportionately interfere with the platform's right to
conduct business. Through a simulation study, we show that recall can be
efficiently estimated using stratified sampling with trained classifiers, and
provide concrete recommendations for its application. Finally, we present a
case study of recall reporting for a subset of Reddit under the Act. Based on
the language in the Act, we identify a number of ways recall could be reported
due to underspecification. We report on one possibility using our improved
estimator, and discuss the implications and need for legal clarification.
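The abstract's argument that literal accuracy is uninformative under class imbalance can be illustrated with a minimal sketch on synthetic data. All numbers here are hypothetical (a 1% violation rate and a degenerate classifier that flags nothing), not figures from the paper:

```python
import random

# Hypothetical moderation dataset: roughly 1% of items violate policy
# (positive class), the rest are benign.
random.seed(0)
y_true = [1 if random.random() < 0.01 else 0 for _ in range(100_000)]

# A degenerate "moderation system" that removes nothing.
y_pred = [0] * len(y_true)

# Confusion-matrix counts.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

# Literal accuracy is high even though every violation is missed;
# precision and recall expose the failure.
accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp) if (tp + fp) else 0.0
recall = tp / (tp + fn) if (tp + fn) else 0.0

print(f"accuracy={accuracy:.3f} precision={precision:.3f} recall={recall:.3f}")
```

Accuracy lands near 0.99 while precision and recall are both zero, which is why a raw "accuracy" figure in a transparency report can be deficient without being false.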
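The idea of estimating recall by stratified sampling with a trained classifier can be sketched as follows. This is an illustrative toy, not the paper's estimator: the corpus, the 5% violation rate, the score distributions, the score bins, and the per-stratum annotation budget are all assumptions, and sampled true labels stand in for human annotation:

```python
import random

random.seed(1)

# Hypothetical corpus: (true label, classifier score) pairs. True labels are
# unknown to the platform; reading one off a sampled item models one annotation.
N = 50_000
items = []
for _ in range(N):
    y = 1 if random.random() < 0.05 else 0
    # Assumed: scores correlate with the true label (higher for violations).
    score = min(1.0, max(0.0, random.gauss(0.7 if y else 0.2, 0.15)))
    items.append((y, score))

# The moderation system removes items scoring at or above a threshold.
threshold = 0.5
removed_positives = sum(y for y, s in items if s >= threshold)

# Recall needs the number of *missed* violations among kept items. Instead of
# annotating all kept items, stratify them by classifier score and annotate a
# small sample per stratum, weighting each stratum by its size.
bins = [(0.0, 0.1), (0.1, 0.3), (0.3, 0.5)]
budget_per_stratum = 500
est_missed = 0.0
for lo, hi in bins:
    stratum = [(y, s) for y, s in items if lo <= s < hi]
    if not stratum:
        continue
    sample = random.sample(stratum, min(budget_per_stratum, len(stratum)))
    pos_rate = sum(y for y, _ in sample) / len(sample)  # "annotation" step
    est_missed += pos_rate * len(stratum)

est_recall = removed_positives / (removed_positives + est_missed)

# Ground truth for comparison (available only in this simulation).
true_missed = sum(y for y, s in items if s < threshold)
true_recall = removed_positives / (removed_positives + true_missed)
print(f"estimated recall={est_recall:.3f} vs true recall={true_recall:.3f}")
```

The annotation budget here is 1,500 labels rather than ~47,000 for the full kept pool; concentrating samples in the strata where the classifier is least confident is what keeps the estimate's variance manageable.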
Keywords
content moderation, digital services