The Role of In-Group Bias and Balanced Data: A Comparison of Human and Machine Recidivism Risk Predictions

COMPASS (2020)

Abstract
Fairness and bias in automated decision-making gain importance as the prevalence of algorithms increases in different areas of social life. This paper contributes to the discussion of algorithmic fairness with a crowdsourced vignette survey on recidivism risk assessment, which we compare to previous studies on this topic and to the predictions of an automated recidivism risk tool. We use the case of the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) and the Broward County dataset of pre-trial defendants as a data source and for comparability with the earlier analysis. In our survey, each respondent assessed recidivism risk for a set of vignettes describing real defendants, where each set was balanced with regard to the defendants' race and re-offender status. The survey ensured a 50:50 ratio of black and white respondents. We found that predictions in our survey, while less accurate, were considerably fairer in terms of equalized odds than those in previous surveys. We attribute this to differences in survey design: using a balanced set of vignettes and not providing feedback after each response. We also analyzed the performance and fairness of predictions by race of respondent and defendant. We found that both white and black respondents tend to favor defendants of their own race, but the magnitude of the effect is relatively small. In addition to the survey, we train two statistical models, one on balanced data and the other on unbalanced data. We observe that the model trained on balanced data is substantially fairer and exhibits less in-group bias.
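The abstract's core comparison, training a classifier on balanced versus unbalanced data and measuring fairness via equalized odds, can be sketched as follows. This is not the authors' code; the column names (race, two_year_recid), the group labels, the feature list, and the use of logistic regression are assumptions, loosely following the public Broward County COMPAS release.

```python
# Minimal sketch (assumed schema, not the authors' implementation):
# compare the equalized-odds gap of a model trained on balanced vs. unbalanced data.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def balance(df: pd.DataFrame, group_cols=("race", "two_year_recid"), seed=0) -> pd.DataFrame:
    """Downsample so every race x re-offender cell has the same number of rows."""
    n = df.groupby(list(group_cols)).size().min()
    return (df.groupby(list(group_cols), group_keys=False)
              .apply(lambda g: g.sample(n=n, random_state=seed)))


def equalized_odds_gap(y_true, y_pred, race) -> float:
    """Largest difference in true/false positive rates between the two race groups."""
    gaps = []
    for label in (0, 1):  # label==0 gives FPR, label==1 gives TPR
        rates = {}
        for grp in ("African-American", "Caucasian"):  # assumed group labels
            mask = (race == grp) & (y_true == label)
            rates[grp] = y_pred[mask].mean()
        gaps.append(abs(rates["African-American"] - rates["Caucasian"]))
    return max(gaps)


def run(df: pd.DataFrame, features: list, balanced: bool) -> float:
    """Train on (optionally balanced) data and return the equalized-odds gap on a test split."""
    train, test = train_test_split(df, test_size=0.3, random_state=0)
    if balanced:
        train = balance(train)
    model = LogisticRegression(max_iter=1000).fit(train[features], train["two_year_recid"])
    preds = model.predict(test[features])
    return equalized_odds_gap(test["two_year_recid"].to_numpy(), preds,
                              test["race"].to_numpy())
```

Under these assumptions, a smaller gap returned by `run(df, features, balanced=True)` than by `run(df, features, balanced=False)` would mirror the abstract's finding that the balanced-data model is fairer.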