Exact Group Fairness Regularization via Classwise Robust Optimization

Sangwon Jung, Taeeon Park, Sanghyuk Chun, Taesup Moon

ICLR 2023

Abstract
Existing group fairness-aware training methods typically employ heuristics, such as re-weighting underrepresented groups according to some rules or using approximate surrogates for the exact fairness metrics as regularization terms, which result in models with sub-optimal accuracy-fairness trade-offs. Such heuristics are used because the fairness metrics are usually non-differentiable or non-convex, and exactly incorporating those metrics into a tractable learning objective is challenging. To address this challenge, we propose a principled method that can incorporate an $\textit{exact}$ form of a well-justified group fairness metric, the Difference of Conditional Accuracy (DCA), as a regularizer using a $\textit{classwise}$ distributionally robust optimization (DRO) framework. Namely, we first show that the DCA is equivalent (up to a constant) to the average (over the classes) of the roots of the $\textit{variances}$ of the group losses, and then employ the Group DRO formulation for each class $\textit{separately}$ to convert the non-differentiable DCA (or variance) regularized group-balanced empirical risk minimization into a more tractable minimax optimization. We further develop an efficient iterative optimization algorithm and show that our resulting method, dubbed FairDRO, makes an interesting connection between re-weighting-based and regularization-based fairness-aware learning. Our experiments show that FairDRO is scalable, easily adaptable to diverse applications, and consistently improves group fairness on several benchmark datasets in terms of the accuracy-fairness trade-off, compared to recent state-of-the-art baselines.
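The key identity in the abstract — DCA equals, up to a constant, the average over classes of the roots of the variances of the per-group losses — can be made concrete in code. Below is a minimal, hypothetical PyTorch sketch of such a classwise variance-based regularizer; it is not the authors' released implementation, and the function and argument names (dca_regularizer, num_classes, num_groups) are our own illustrative choices.

```python
import torch
import torch.nn.functional as F

def dca_regularizer(logits, labels, groups, num_classes, num_groups):
    # Per-sample cross-entropy losses (no reduction).
    losses = F.cross_entropy(logits, labels, reduction="none")
    reg = logits.new_zeros(())
    for y in range(num_classes):
        # Collect the mean loss of each (class y, group a) cell in the batch.
        group_losses = []
        for a in range(num_groups):
            mask = (labels == y) & (groups == a)
            if mask.any():
                group_losses.append(losses[mask].mean())
        if len(group_losses) > 1:
            gl = torch.stack(group_losses)
            # Root of the (population) variance of group losses for class y.
            reg = reg + ((gl - gl.mean()) ** 2).mean().sqrt()
    # Average over classes, mirroring the stated DCA-variance equivalence.
    return reg / num_classes
```

In a FairDRO-style objective this term would presumably be added, with some weight, to a group-balanced empirical risk; per the abstract, the paper sidesteps the term's non-differentiability by instead solving an equivalent Group DRO minimax problem for each class separately.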
Keywords
Group Fairness, DRO