Debiasing Credit Scoring using Evolutionary Algorithms

arXiv (2021)

Abstract
This paper investigates the application of machine learning when training a credit decision model over real, publicly available data whilst accounting for "bias objectives". We use the term "bias objective" to describe the requirement that a trained model's discriminatory bias against a given group of individuals does not exceed a prescribed level, where that level may be zero. This research presents an empirical study examining the tension between competing model training objectives, which in all cases include one or more bias objectives. This work is motivated by the observation that the parties associated with creditworthiness models have requirements that cannot, with certainty, be fully met simultaneously. The research herein seeks to highlight the impracticality of satisfying all parties' objectives, demonstrating the need for "trade-offs" to be made. The results and conclusions presented in this paper are of particular importance for all stakeholders within the credit scoring industry that rely upon artificial intelligence (AI) models as part of the decision-making process when determining the creditworthiness of individuals. This paper provides an exposition of the difficulty of training AI models that can simultaneously satisfy multiple bias objectives whilst maintaining acceptable levels of accuracy. Stakeholders should be aware of this difficulty and should acknowledge that some degree of discriminatory bias, across a number of protected characteristics and formulations of bias, cannot be avoided.
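As a concrete illustration of what a "bias objective" might look like alongside an accuracy objective, the minimal sketch below scores a candidate model's decisions on both accuracy and a bias metric, treating any excess over the prescribed bias cap as a constraint violation that a multi-objective evolutionary search could trade off (e.g. via Pareto ranking). The metric choice (demographic parity difference), the function names, and the toy data are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative sketch only: one possible scoring of a bias objective next to
# an accuracy objective for a multi-objective evolutionary search. The metric,
# names, and data below are assumptions, not the paper's method.
import numpy as np

def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Fraction of credit decisions the candidate model gets right."""
    return float(np.mean(y_true == y_pred))

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in approval rates between the protected group (group == 1)
    and the rest of the population."""
    rate_protected = y_pred[group == 1].mean()
    rate_other = y_pred[group == 0].mean()
    return float(abs(rate_protected - rate_other))

def fitness(y_true, y_pred, group, bias_cap=0.0):
    """Multi-objective fitness: maximise accuracy while keeping the bias metric
    at or below a prescribed level (bias_cap), which may be zero. Returns
    (accuracy, constraint violation) for the evolutionary algorithm to rank."""
    acc = accuracy(y_true, y_pred)
    bias = demographic_parity_difference(y_pred, group)
    violation = max(0.0, bias - bias_cap)
    return acc, violation

# Toy usage with synthetic labels, decisions, and group membership.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_pred = rng.integers(0, 2, size=200)
group = rng.integers(0, 2, size=200)
print(fitness(y_true, y_pred, group, bias_cap=0.05))
```

With several protected characteristics, each would contribute its own bias objective, which is precisely the setting in which the paper argues that not all objectives can be satisfied at once.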
Keywords
credit scoring, evolutionary algorithms