Enhancing Group Fairness in Online Settings Using Oblique Decision Forests
ICLR 2024
Abstract
Fairness, especially group fairness, is an important consideration in the
context of machine learning systems. The most commonly adopted group
fairness-enhancing techniques are in-processing methods that rely on a mixture
of a fairness objective (e.g., demographic parity) and a task-specific
objective (e.g., cross-entropy) during the training process. However, when data
arrives in an online fashion – one instance at a time – optimizing such
fairness objectives poses several challenges. In particular, group fairness
objectives are defined using expectations of predictions across different
demographic groups. In the online setting, where the algorithm has access to a
single instance at a time, estimating the group fairness objective requires
additional storage and significantly more computation (e.g., forward/backward
passes) than the task-specific objective at every time step. In this paper, we
propose Aranyani, an ensemble of oblique decision trees, to make fair decisions
in online settings. The hierarchical tree structure of Aranyani enables
parameter isolation and allows us to efficiently compute the fairness gradients
using aggregate statistics of previous decisions, eliminating the need for
additional storage and forward/backward passes. We also present an efficient
framework to train Aranyani and theoretically analyze several of its
properties. We conduct empirical evaluations on 5 publicly available benchmarks
(including vision and language datasets) to show that Aranyani achieves a
better accuracy-fairness trade-off compared to baseline approaches.
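Two ideas from the abstract can be illustrated concretely: an oblique split routes an instance with a hyperplane over all features rather than a single-feature threshold, and the demographic-parity gap can be estimated online from aggregate statistics of past predictions, with no per-instance storage. The sketch below is illustrative only; the names and two-group simplification are assumptions, not the paper's actual implementation.

```python
import math

def oblique_split(x, w, b):
    """Soft routing probability at an oblique tree node: a full linear
    combination w.x + b passed through a sigmoid, in contrast to an
    axis-aligned split that thresholds a single feature."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

class OnlineParityTracker:
    """Running estimate of the demographic-parity gap
    |E[y_hat | g=0] - E[y_hat | g=1]| from aggregate statistics.
    Only per-group sums and counts are kept, so the memory cost is O(1)
    in the number of instances seen (illustrative two-group case)."""

    def __init__(self):
        self.sums = [0.0, 0.0]   # running sum of predictions per group
        self.counts = [0, 0]     # instances seen per group

    def update(self, prediction, group):
        self.sums[group] += prediction
        self.counts[group] += 1

    def gap(self):
        means = [s / c if c else 0.0
                 for s, c in zip(self.sums, self.counts)]
        return abs(means[0] - means[1])
```

In this simplified view, each streaming instance updates the tracker once, and the current `gap()` value can serve as a fairness penalty term at every time step without replaying or storing earlier instances.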
Keywords
Fairness, Online Learning, Oblique Decision Trees