A Game-Theoretic Analysis of Auditing Differentially Private Algorithms with Epistemically Disparate Herd
arXiv (2024)
Abstract
Privacy-preserving AI algorithms are widely adopted in various domains, but
the lack of transparency might pose accountability issues. While auditing
algorithms can address this issue, machine-based audit approaches are often
costly and time-consuming. Herd audit, on the other hand, offers an alternative
solution by harnessing collective intelligence. Nevertheless, the presence of
epistemic disparity among auditors, resulting in varying levels of expertise
and access to knowledge, may impact audit performance. An effective herd audit
will establish a credible accountability threat for algorithm developers,
incentivizing them to uphold their claims. In this study, our objective is to
develop a systematic framework that examines the impact of herd audits on
algorithm developers using the Stackelberg game approach. The optimal strategy
for auditors emphasizes the importance of easy access to relevant information,
as it increases the auditors' confidence in the audit process. Similarly, the
optimal choice for developers suggests that herd audit is viable when auditors
face lower costs in acquiring knowledge. By enhancing transparency and
accountability, herd audit contributes to the responsible development of
privacy-preserving algorithms.
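To make the leader-follower logic concrete, below is a minimal toy sketch of a Stackelberg interaction between a developer and a herd auditor. It is not the paper's model: the payoff functions, detection probability, parameter names, and numerical values are all illustrative assumptions. The developer (leader) chooses how far to deviate from its claimed privacy guarantee while anticipating the auditor's (follower's) best-response effort, which is costly because knowledge must be acquired.

```python
"""Toy Stackelberg (leader-follower) sketch of a herd-audit interaction.

Illustrative only: payoffs, parameters, and names below are assumptions,
not the paper's model. The developer (leader) picks a deviation from its
claimed differential-privacy guarantee; the auditor (follower) picks how
much costly effort to spend acquiring knowledge before auditing.
"""

import numpy as np

# Hypothetical parameters (assumptions, not taken from the paper).
DETECTION_GAIN = 4.0   # auditor's reward for catching a deviation
EFFORT_COST = 1.0      # auditor's cost of acquiring knowledge (epistemic cost)
DEVIATION_GAIN = 2.0   # developer's benefit from relaxing privacy
PENALTY = 5.0          # developer's penalty if the deviation is detected


def detection_prob(deviation: float, effort: float) -> float:
    """Illustrative detection probability: increases with deviation and effort."""
    return 1.0 - np.exp(-deviation * effort)


def auditor_payoff(deviation: float, effort: float) -> float:
    """Follower's utility: expected detection reward minus effort cost."""
    return DETECTION_GAIN * detection_prob(deviation, effort) - EFFORT_COST * effort


def developer_payoff(deviation: float, effort: float) -> float:
    """Leader's utility: gain from deviating minus expected penalty if caught."""
    return DEVIATION_GAIN * deviation - PENALTY * detection_prob(deviation, effort)


def best_response(deviation: float, efforts: np.ndarray) -> float:
    """Auditor's best effort against a fixed deviation (grid search)."""
    return efforts[np.argmax([auditor_payoff(deviation, e) for e in efforts])]


def stackelberg_equilibrium(deviations: np.ndarray, efforts: np.ndarray):
    """Leader maximizes payoff while anticipating the follower's best response."""
    return max(
        ((d, best_response(d, efforts)) for d in deviations),
        key=lambda pair: developer_payoff(*pair),
    )


if __name__ == "__main__":
    deviations = np.linspace(0.0, 1.0, 101)  # leader's strategy grid
    efforts = np.linspace(0.0, 2.0, 201)     # follower's strategy grid
    d_star, e_star = stackelberg_equilibrium(deviations, efforts)
    print(f"developer deviation: {d_star:.2f}, auditor effort: {e_star:.2f}")
```

In this toy setup, raising EFFORT_COST (i.e., making relevant knowledge harder to access) typically pushes the equilibrium toward larger developer deviations, which mirrors the abstract's point that herd audit is most credible when auditors face low costs of acquiring knowledge.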