One-shot Empirical Privacy Estimation for Federated Learning
arXiv (2023)
Abstract
Privacy estimation techniques for differentially private (DP) algorithms are
useful for comparing against analytical bounds, or for empirically measuring
privacy loss in settings where known analytical bounds are not tight. However,
existing privacy auditing techniques usually make strong assumptions about the
adversary (e.g., knowledge of intermediate model iterates or the training data
distribution), are tailored to specific tasks, model architectures, or DP
algorithms, and/or require retraining the model many times (typically on the
order of thousands). These shortcomings make deploying such techniques at scale
difficult in practice, especially in federated settings where model training
can take days or weeks. In this work, we present a novel "one-shot" approach
that can systematically address these challenges, allowing efficient auditing
or estimation of the privacy loss of a model during the same, single training
run used to fit model parameters, and without requiring any a priori knowledge
about the model architecture, task, or DP training algorithm. We show that our
method provides provably correct estimates for the privacy loss under the
Gaussian mechanism, and we demonstrate its performance on well-established FL
benchmark datasets under several adversarial threat models.
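The "provably correct under the Gaussian mechanism" claim rests on the standard correspondence between a Gaussian mechanism's noise multiplier and its (ε, δ) guarantee. As a minimal sketch of that final conversion step only (not the paper's one-shot estimation procedure itself), the Python snippet below inverts the exact Gaussian-mechanism delta curve of Balle & Wang (2018) to report an ε estimate from an empirically recovered effective noise multiplier; the function names and the example sigma value are hypothetical.

```python
# Minimal sketch (not the paper's algorithm): convert an empirically
# estimated noise multiplier of a Gaussian mechanism into an
# (epsilon, delta) privacy-loss estimate. Helper names and the example
# sigma below are hypothetical.
from math import erf, exp, sqrt


def _phi(x: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def delta_for_epsilon(sigma: float, epsilon: float) -> float:
    """Exact delta of the Gaussian mechanism with sensitivity 1 and
    noise std sigma (Balle & Wang, 2018)."""
    return _phi(0.5 / sigma - epsilon * sigma) - exp(epsilon) * _phi(
        -0.5 / sigma - epsilon * sigma
    )


def epsilon_for_delta(sigma: float, delta: float) -> float:
    """Invert delta_for_epsilon by bisection (delta decreases in epsilon)."""
    lo, hi = 0.0, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if delta_for_epsilon(sigma, mid) > delta:
            lo = mid  # not private enough yet; need larger epsilon
        else:
            hi = mid
    return hi


if __name__ == "__main__":
    # Suppose a one-shot audit recovered an effective noise multiplier
    # of 1.1 (hypothetical value); report the implied epsilon at delta=1e-5.
    print(epsilon_for_delta(sigma=1.1, delta=1e-5))
```

In this framing, the auditing procedure's job is to produce the effective sigma; once that is in hand, the ε estimate follows from the closed-form curve above rather than from thousands of retraining runs.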