Differential Privacy-Enabled Multi-Party Learning with Dynamic Privacy Budget Allocating Strategy

Electronics (2023)

Abstract
As a promising paradigm of decentralized machine learning, multi-party learning has attracted increasing attention owing to its capability of preventing participants' private data from being directly exposed to adversaries. Multi-party learning enables participants to train their models locally without uploading private data to a server. However, recent studies have shown that adversaries may launch a series of attacks on the learning model and extract private information about participants by analyzing the shared parameters. Moreover, existing privacy-preserving multi-party learning approaches consume a large total privacy budget, which poses a considerable challenge to the trade-off between privacy guarantees and model utility. To address this issue, this paper explores an adaptive differentially private multi-party learning framework that incorporates the zero-concentrated differential privacy technique into multi-party learning to mitigate privacy threats and offers sharper quantitative privacy-loss bounds. We further design a dynamic privacy budget allocation strategy that alleviates the accumulation of the total privacy budget and provides better privacy guarantees without compromising model utility: more noise is injected into the model parameters in the early stages of training, and the amount of noise is gradually reduced as the direction of gradient descent becomes more accurate. Theoretical analysis and extensive experiments on benchmark datasets validate that our approach effectively improves model performance with less privacy loss.
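The dynamic allocation idea can be made concrete with a small sketch. The Python snippet below is illustrative only and not the paper's actual algorithm: it splits a total zCDP budget across training rounds with a geometric schedule (so early rounds get a smaller per-round budget and therefore more noise) and calibrates Gaussian noise via the standard zCDP relation sigma = sensitivity / sqrt(2 * rho). The function names, the geometric schedule, and the clipping-based sensitivity bound are assumptions made for illustration.

```python
import numpy as np

def round_budgets(total_rho, num_rounds, growth=1.05):
    # Hypothetical schedule: give early rounds a smaller share of the total
    # zCDP budget (hence more noise) and later rounds a larger share.
    weights = growth ** np.arange(num_rounds)
    return total_rho * weights / weights.sum()  # per-round budgets sum to total_rho

def gaussian_noise_scale(rho_t, sensitivity=1.0):
    # The Gaussian mechanism with L2 sensitivity `sensitivity` and noise
    # standard deviation sigma satisfies rho-zCDP when
    # sigma = sensitivity / sqrt(2 * rho_t).
    return sensitivity / np.sqrt(2.0 * rho_t)

def perturb_update(update, rho_t, clip_norm=1.0, rng=None):
    # Clip a local model update to bound its L2 sensitivity, then add
    # Gaussian noise calibrated to this round's budget rho_t.
    if rng is None:
        rng = np.random.default_rng()
    clipped = update * min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    sigma = gaussian_noise_scale(rho_t, sensitivity=clip_norm)
    return clipped + rng.normal(0.0, sigma, size=update.shape)

# Example: 100 rounds under a total budget of rho = 0.5; the per-round noise
# scale shrinks monotonically as training proceeds.
budgets = round_budgets(total_rho=0.5, num_rounds=100)
print(gaussian_noise_scale(budgets[0]), gaussian_noise_scale(budgets[-1]))
```

Under zCDP, per-round budgets compose additively, so the schedule above keeps the accumulated privacy loss at the chosen total while shifting noise toward the early, less accurate stages of training.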
Keywords
multi-party learning, privacy, differential privacy, privacy budget, noise perturbation