Diversity-aware fairness testing of machine learning classifiers through hashing-based sampling

Information and Software Technology (2024)

Abstract
Context: There are growing concerns about algorithmic fairness, as some machine learning (ML)-based algorithms have been found to exhibit biases against protected attributes such as gender, race, and age. Individual fairness requires an ML classifier to produce similar outputs for similar individuals. Verification-based testing (VBT) is a state-of-the-art black-box testing algorithm for individual fairness that leverages constraint solving to generate test cases.

Objective: Generating diverse test cases is expected to facilitate efficient detection of diverse discriminatory data instances (i.e., cases that violate individual fairness). Hashing-based sampling techniques draw a sample approximately uniformly at random from the set of solutions of given Boolean constraints. We propose VBT-X, which augments VBT with hashing-based sampling to improve its testing performance.

Method: We realize hashing-based sampling for VBT. The challenge is that off-the-shelf hashing-based sampling techniques cannot be integrated directly, because the constraints in VBT are generally not Boolean. Moreover, we propose several enhancement techniques to make VBT-X more efficient.

Results: To evaluate our method, we conduct experiments in which VBT-X is compared to VBT, SG, and ExpGA (other well-known fairness testing algorithms) over a set of configurations spanning several datasets, protected attributes, and ML classifiers. The results show that, in each configuration, VBT-X detects more discriminatory data instances with higher diversity than VBT and SG. VBT-X also detects discriminatory data instances with higher diversity than ExpGA, though it detects fewer of them than ExpGA.

Conclusion: Our proposed method outperforms other state-of-the-art black-box fairness testing algorithms, particularly in terms of diversity. It can serve to efficiently identify flaws in ML classifiers with respect to individual fairness, guiding subsequent improvement of a classifier. Although our method is specific to individual fairness, it could be adapted, with some technical changes, to test other aspects of a software system such as security and counterfactual explanations, which remains future work.
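To make the notion of a discriminatory data instance concrete, the following is a minimal sketch, not the paper's implementation, of the check that black-box individual-fairness testing performs: perturb only the protected attribute of an input and see whether the classifier's prediction changes. The `predict` interface, the flat feature encoding, and the helper name are illustrative assumptions.

```python
# Minimal sketch of a black-box individual-fairness check.
# Assumptions (not from the paper): the classifier exposes a
# scikit-learn-style predict() on a flat feature vector, and the
# protected attribute is categorical with a known finite domain.

from typing import Callable, Sequence

def is_discriminatory(predict: Callable[[list], int],
                      instance: list,
                      protected_idx: int,
                      protected_domain: Sequence) -> bool:
    """Return True if changing only the protected attribute of
    `instance` changes the prediction, i.e., the pair of inputs
    witnesses a violation of individual fairness."""
    original = predict(instance)
    for value in protected_domain:
        if value == instance[protected_idx]:
            continue
        variant = list(instance)
        variant[protected_idx] = value  # differ only in the protected attribute
        if predict(variant) != original:
            return True  # found a discriminatory data instance
    return False
```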
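The hashing-based sampling that VBT-X builds on can likewise be illustrated in miniature: conjoining a Boolean formula with randomly chosen XOR (parity) constraints partitions its solution set into roughly equal cells, so a model of the strengthened formula is an approximately uniform sample. The sketch below uses Z3 purely for illustration; how VBT-X bridges VBT's generally non-Boolean constraints to this Boolean setting is the paper's technical contribution and is not shown here.

```python
# Toy illustration of hashing-based sampling: conjoin a Boolean
# formula with random XOR (parity) constraints, then ask a solver
# for any model. Each XOR cuts the solution set roughly in half,
# so a surviving model is an approximately uniform sample. Z3 is
# used only for simplicity; dedicated samplers (e.g., UniGen) rely
# on XOR-aware SAT solvers instead.

import random
from functools import reduce
from z3 import Bools, BoolVal, Xor, Or, Solver, sat, is_true

def hash_sample(formula, variables, num_xors, rng=random):
    """Sample a model of `formula` via random parity constraints."""
    solver = Solver()
    solver.add(formula)
    for _ in range(num_xors):
        # Each variable joins the parity constraint with probability
        # 1/2; a random constant bit fixes the target parity.
        chosen = [v for v in variables if rng.random() < 0.5]
        solver.add(reduce(Xor, chosen, BoolVal(rng.random() < 0.5)))
    if solver.check() == sat:
        model = solver.model()
        return {str(v): is_true(model.eval(v, model_completion=True))
                for v in variables}
    return None  # the random "cell" is empty; real samplers retry

if __name__ == "__main__":
    a, b, c = Bools("a b c")
    # Any satisfiable Boolean constraint works as the input formula.
    print(hash_sample(Or(a, b, c), [a, b, c], num_xors=2))
```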
Keywords
Algorithm fairness, Fairness testing, SAT/SMT solving, Constraint sampling, Hashing-based technique