Fairness Through the Lens of Proportional Equality

Adaptive Agents and Multi-Agent Systems (2019)

Abstract
Today, automated algorithms such as machine learning classifiers play an increasingly pivotal role in important societal decisions, including hiring, loan allocation, and criminal risk assessment. This motivates the need to probe the outcomes of a prediction model for discrimination against specific groups of individuals. In this context, a crucial challenge is to formally define a satisfactory notion of fairness. Our contribution in this paper is to formalize Proportional Equality (PE) as a fairness notion. We further show that it is a more appropriate criterion than Disparate Impact (DI), the popular existing notion used for evaluating the fairness of a classifier's outcomes.
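For context, the Disparate Impact notion that the abstract critiques is commonly computed as the ratio of positive-outcome rates between a protected group and a reference group, often judged against the "80% rule" threshold. The sketch below illustrates that standard DI computation; the group labels, decision data, and threshold interpretation are illustrative assumptions, not drawn from this paper (whose contribution, Proportional Equality, is defined in the full text).

```python
# Illustrative sketch of the Disparate Impact (DI) ratio the abstract
# critiques, under the common "80% rule" reading. The groups and
# decision data here are hypothetical examples.

def disparate_impact(decisions, groups, protected, reference):
    """Ratio of positive-outcome rates: protected group vs. reference group."""
    def positive_rate(group):
        members = [d for d, g in zip(decisions, groups) if g == group]
        return sum(members) / len(members)
    return positive_rate(protected) / positive_rate(reference)

# Hypothetical hiring decisions (1 = hired) for two groups "a" and "b".
decisions = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

di = disparate_impact(decisions, groups, protected="a", reference="b")
print(round(di, 2))  # 0.6 / 0.8 = 0.75 — below the common 0.8 threshold
```

A DI ratio below 0.8 is conventionally read as evidence of adverse impact against the protected group; the paper argues PE is a more appropriate criterion than this ratio test.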
Keywords
Classification, Discrimination, Racial bias, Gender bias, Prior probability shifts, Fairness concepts