Privacy for Fairness: Information Obfuscation for Fair Representation Learning with Local Differential Privacy
CoRR (2024)
Abstract
As machine learning (ML) becomes more prevalent in human-centric
applications, there is a growing emphasis on algorithmic fairness and privacy
protection. While previous research has explored these areas as separate
objectives, the complex relationship between the two is increasingly
recognized. However, prior work has primarily examined this interplay through
empirical investigations, with limited attention to theoretical exploration. This
study aims to bridge this gap by introducing a theoretical framework that
enables a comprehensive examination of their interrelation. We develop
and analyze an information bottleneck (IB)-based information obfuscation method
with local differential privacy (LDP) for fair representation learning. In
contrast to many empirical studies on fairness in ML, we show that the
incorporation of LDP randomizers during the encoding process can enhance the
fairness of the learned representation. Our analysis demonstrates that the
disclosure of sensitive information is constrained by the privacy budget of the
LDP randomizer, thereby enabling the optimization process within the IB
framework to effectively suppress sensitive information while preserving the
desired utility through obfuscation. Based on the proposed method, we further
develop a variational representation encoding approach that simultaneously
achieves fairness and LDP. Our variational encoding approach offers practical
advantages. It is trained using a non-adversarial method and does not require
the introduction of any variational prior. Extensive experiments validate
our theoretical results and demonstrate the ability of
our proposed approach to achieve both LDP and fairness while preserving
adequate utility.
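
To make the mechanism concrete, the sketch below illustrates the first ingredient the abstract describes: an epsilon-LDP randomizer (here, binary randomized response) applied to a sensitive attribute before it can leak into the learned representation, together with an empirical check that the information the randomized value retains about the true attribute shrinks as the privacy budget epsilon decreases. This is a minimal illustration under stated assumptions, not the authors' implementation; the function names (randomized_response, empirical_mi) and the choice of randomized response as the LDP mechanism are hypothetical.

```python
# Minimal sketch (assumed setup, not the paper's code): an epsilon-LDP
# randomized-response mechanism for a binary sensitive attribute, plus a
# plug-in estimate of how much information about the true attribute survives
# randomization. Smaller epsilon -> stronger obfuscation -> less sensitive
# information available to the downstream IB-style encoder.
import math
import random
from collections import Counter


def randomized_response(bit: int, epsilon: float) -> int:
    """epsilon-LDP randomizer: report the true bit with prob e^eps / (e^eps + 1)."""
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_keep else 1 - bit


def empirical_mi(pairs) -> float:
    """Plug-in estimate of the mutual information I(S; S~) in nats."""
    n = len(pairs)
    joint = Counter(pairs)
    p_s = Counter(s for s, _ in pairs)
    p_t = Counter(t for _, t in pairs)
    mi = 0.0
    for (s, t), c in joint.items():
        p_st = c / n
        mi += p_st * math.log(p_st / ((p_s[s] / n) * (p_t[t] / n)))
    return mi


if __name__ == "__main__":
    random.seed(0)
    s_true = [random.randint(0, 1) for _ in range(50_000)]
    for eps in (0.1, 0.5, 1.0, 4.0):
        pairs = [(s, randomized_response(s, eps)) for s in s_true]
        # As epsilon shrinks, the randomized attribute carries less information
        # about the true sensitive value, which is the leverage the IB objective
        # uses to suppress sensitive information while preserving utility.
        print(f"epsilon={eps}: I(S; S~) ~= {empirical_mi(pairs):.4f} nats")
```

Running the sketch shows the estimated mutual information dropping from roughly the full one bit (about 0.69 nats) at large epsilon toward zero at small epsilon, consistent with the abstract's claim that the disclosure of sensitive information is constrained by the LDP privacy budget.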