Automatic annotation of protected attributes to support fairness optimization

INFORMATION SCIENCES (2024)

Abstract
Recent research has shown that automating high-risk decision-making tasks without awareness of bias can result in unfair decisions. The most common approaches to this problem adopt definitions of fairness based on protected attributes. Precise annotation of protected attributes enables bias mitigation techniques to be applied to kinds of data that are commonly unlabeled (e.g., images, text, etc.). This paper proposes a framework to automatically annotate protected attributes in data collections. The framework provides a single interface for annotating protected attributes of different types (e.g., gender, race, etc.) and from different kinds of data. Internally, the framework coordinates multiple sensors to produce the final annotation. Several sensors for textual data are proposed, and an optimization search technique is designed to tune the framework to specific domains. Additionally, a small dataset of movie reviews, annotated with gender and sentiment, was created. An evaluation on text datasets from diverse domains shows the quality of the annotations and their effectiveness as a proxy for estimating fairness in datasets and machine learning models. The source code is available online for the research community.
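
The abstract gives no implementation details, but the sensor-coordination design it describes can be sketched roughly. The Python snippet below is a minimal, hypothetical illustration only: the names (Sensor, PronounSensor, NameLexiconSensor, AttributeAnnotator), the weighted-vote combination, and the tunable weights are assumptions made for exposition, not the authors' actual API; the paper's published source code is the authoritative reference.

```python
# Hypothetical sketch of a sensor-coordination framework for annotating
# protected attributes in text. All names and the voting scheme are
# illustrative assumptions, not the paper's actual implementation.
from dataclasses import dataclass
from typing import Optional
import re


class Sensor:
    """A sensor inspects one text and may emit a protected-attribute label."""

    def annotate(self, text: str) -> Optional[str]:
        raise NotImplementedError


class PronounSensor(Sensor):
    """Infers gender from pronoun counts, a common textual cue."""

    def annotate(self, text: str) -> Optional[str]:
        tokens = re.findall(r"[a-z']+", text.lower())
        fem = sum(t in {"she", "her", "hers", "herself"} for t in tokens)
        masc = sum(t in {"he", "him", "his", "himself"} for t in tokens)
        if fem > masc:
            return "female"
        if masc > fem:
            return "male"
        return None  # abstain when the cue is ambiguous


class NameLexiconSensor(Sensor):
    """Looks up tokens in a tiny gendered-name lexicon (toy example)."""

    LEXICON = {"mary": "female", "alice": "female", "john": "male", "bob": "male"}

    def annotate(self, text: str) -> Optional[str]:
        for token in re.findall(r"[a-z]+", text.lower()):
            if token in self.LEXICON:
                return self.LEXICON[token]
        return None


@dataclass
class AttributeAnnotator:
    """Coordinates several sensors behind a single annotate() interface.

    Each sensor casts a weighted vote; the label with the highest total
    weight wins. The weights are the knobs that a domain-specific
    optimization search (e.g., tuned against a small labeled sample such
    as the movie-review dataset) could adjust.
    """

    sensors: list
    weights: list

    def annotate(self, text: str) -> Optional[str]:
        votes: dict = {}
        for sensor, weight in zip(self.sensors, self.weights):
            label = sensor.annotate(text)
            if label is not None:
                votes[label] = votes.get(label, 0.0) + weight
        return max(votes, key=votes.get) if votes else None


annotator = AttributeAnnotator(
    sensors=[PronounSensor(), NameLexiconSensor()],
    weights=[1.0, 0.5],
)
print(annotator.annotate("Alice said she loved the film."))  # -> "female"
```

Under this reading, extending the framework to other attribute types or kinds of data would amount to adding sensors behind the same interface, while the weight vector remains the target of the domain-tuning search.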
Keywords
Fairness, Gender bias, Natural language processing, Corpus, Optimization