Auditing Private Prediction
CoRR (2024)
Abstract
Differential privacy (DP) offers a theoretical upper bound on the potential
privacy leakage of an algorithm, while empirical auditing establishes a
practical lower bound. Auditing techniques exist for DP training algorithms.
However, machine learning can also be made private at inference. We propose
the first framework for auditing private prediction, where we instantiate
adversaries with varying poisoning and query capabilities. This enables us to
study the privacy leakage of four private prediction algorithms: PATE [Papernot
et al., 2016], CaPC [Choquette-Choo et al., 2020], PromptPATE [Duan et al.,
2023], and Private-kNN [Zhu et al., 2020]. To conduct our audit, we introduce
novel techniques to empirically evaluate privacy leakage in terms of Rényi DP.
Our experiments show that (i) the privacy analysis of private prediction can be
improved, (ii) algorithms which are easier to poison lead to much higher
privacy leakage, and (iii) the privacy leakage is significantly lower for
adversaries without query control than for those with full control.