TPMDP: Threshold Personalized Multi-party Differential Privacy via Optimal Gaussian Mechanism

CoRR (2023)

Abstract
In modern distributed computing applications, such as federated learning and AIoT systems, protecting privacy is crucial to prevent misbehaving parties from colluding to steal others' private information. However, guaranteeing the utility of computation outcomes while protecting all parties' privacy can be challenging, particularly when the parties' privacy requirements are highly heterogeneous. In this paper, we propose a novel privacy framework for multi-party computation called Threshold Personalized Multi-party Differential Privacy (TPMDP), which addresses a limited number of semi-honest colluding adversaries. Our framework enables each party to have a personalized privacy budget. We design a multi-party Gaussian mechanism that is easy to implement and satisfies TPMDP, wherein each party perturbs the computation outcome in a secure multi-party computation protocol using Gaussian noise. To optimize the utility of the mechanism, we cast the utility loss minimization problem into a linear programming (LP) problem. We exploit the specific structure of this LP problem to compute the optimal solution after O(n) computations, where n is the number of parties, while a generic solver may require exponentially many computations. Extensive experiments demonstrate the benefits of our approach in terms of low utility loss and high efficiency compared to existing private mechanisms that do not consider personalized privacy requirements or collusion thresholds.
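The abstract describes each party adding its own calibrated Gaussian noise to the shared outcome so that, even if up to a threshold of parties collude, the remaining parties' noise still protects everyone else. Below is a minimal, hypothetical sketch of that idea using the classic (non-optimal) Gaussian calibration; the paper's actual mechanism derives an *optimal* noise allocation via an LP solved in O(n), which this sketch does not implement. All function names and the `delta` default are illustrative assumptions, not the authors' API.

```python
import math
import random

def gaussian_sigma(epsilon, delta, sensitivity=1.0):
    # Classic Gaussian-mechanism calibration (Dwork–Roth style bound).
    # The paper instead computes an optimal per-party allocation via LP.
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon

def perturbed_sum(values, budgets, delta=1e-5):
    # Each party perturbs its contribution with noise calibrated to its
    # own personalized privacy budget before secure aggregation, so the
    # noise of non-colluding parties remains in the released sum.
    total = 0.0
    for value, eps in zip(values, budgets):
        total += value + random.gauss(0.0, gaussian_sigma(eps, delta))
    return total
```

Parties with smaller budgets (stricter privacy) contribute larger noise; the utility question the paper optimizes is how to split the required total noise across heterogeneous budgets under a collusion threshold.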
Keywords
Differential privacy, secure multi-party computation, personalized privacy, distributed computing