FairerCLIP: Debiasing CLIP's Zero-Shot Predictions using Functions in RKHSs
arXiv (2024)
Abstract
Large pre-trained vision-language models such as CLIP provide compact and
general-purpose representations of text and images that are demonstrably
effective across multiple downstream zero-shot prediction tasks. However, owing
to the nature of their training process, these models have the potential to 1)
propagate or amplify societal biases in the training data and 2) learn to rely
on spurious features. This paper proposes FairerCLIP, a general approach for
making zero-shot predictions of CLIP more fair and robust to spurious
correlations. We formulate the problem of jointly debiasing CLIP's image and
text representations in reproducing kernel Hilbert spaces (RKHSs), which
affords multiple benefits: 1) Flexibility: Unlike existing approaches, which
are specialized to either learn with or without ground-truth labels, FairerCLIP
is adaptable to learning in both scenarios. 2) Ease of Optimization: FairerCLIP
lends itself to an iterative optimization involving closed-form solvers, which
leads to 4×-10× faster training than existing methods. 3)
Sample Efficiency: Under sample-limited conditions, FairerCLIP significantly
outperforms its baselines, which can fail entirely in this regime. And 4)
Performance: Empirically, FairerCLIP achieves appreciable accuracy gains over
the respective baselines on benchmark fairness and spurious-correlation datasets.
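The abstract's claim that an RKHS formulation yields closed-form solvers can be illustrated with a small sketch. The authors' actual method jointly debiases CLIP's image and text representations with an iterative optimization; the code below is only a loose, assumed analogue: it residualizes CLIP embeddings against a sensitive attribute using kernel ridge regression, whose solution is closed-form. The function names and the residualization scheme are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix.
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def debias_features(Z, S, lam=1e-2, gamma=1.0):
    """Remove the component of representations Z (n x d) that is
    predictable from the sensitive attribute S (n x k) with a kernel
    ridge regressor in an RKHS, and return the residual.

    Hypothetical helper for illustration; not FairerCLIP's objective.
    """
    n = Z.shape[0]
    K = rbf_kernel(S, S, gamma)
    # Closed-form kernel ridge solution: alpha = (K + lam*n*I)^{-1} Z.
    # No gradient descent is needed, which is the source of the speedup
    # that closed-form RKHS solvers offer.
    alpha = np.linalg.solve(K + lam * n * np.eye(n), Z)
    Z_hat = K @ alpha      # part of Z explained by S
    return Z - Z_hat       # residual, less dependent on S

# Example with synthetic stand-ins: Z_img mimics CLIP image embeddings,
# S is a binary sensitive attribute encoded as a float column.
rng = np.random.default_rng(0)
Z_img = rng.normal(size=(200, 512))
S = rng.integers(0, 2, size=(200, 1)).astype(float)
Z_debiased = debias_features(Z_img, S)
```

Because each step reduces to a linear solve rather than gradient-based training, this style of update runs in time cubic in the number of samples but requires no learning-rate tuning, which is consistent with the training-speed advantage the abstract reports.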