The Neutrality Fallacy: When Algorithmic Fairness Interventions are (Not) Positive Action
arXiv (2024)
Abstract
Various metrics and interventions have been developed to identify and mitigate unfair outputs of machine learning systems. While individuals and organizations have an obligation to avoid discrimination, the use of fairness-aware machine learning interventions has also been described as amounting to 'algorithmic positive action' under European Union (EU) non-discrimination law. As the Court of Justice of the European Union has been strict when it comes to assessing the lawfulness of positive action, this would impose a significant legal burden on those wishing to implement fair-ml interventions. In this paper, we propose that algorithmic fairness interventions should often be interpreted as a means to prevent discrimination, rather than as a measure of positive action. Specifically, we suggest that this category mistake can often be attributed to neutrality fallacies: faulty assumptions regarding the neutrality of fairness-aware algorithmic decision-making. Our findings raise the question of whether a negative obligation to refrain from discrimination is sufficient in the context of algorithmic decision-making. Consequently, we suggest moving away from a duty to 'not do harm' towards a positive obligation to actively 'do no harm' as a more adequate framework for algorithmic decision-making and fair-ml interventions.