A Federated Learning Framework Based on Differentially Private Continuous Data Release

IEEE Transactions on Dependable and Secure Computing (2024)

Abstract
Federated learning (FL) provides a learning framework in which participants do not share their local raw data, yet individual privacy remains at risk of disclosure through attacks on the trained models. Owing to its strong privacy guarantee, differential privacy (DP) is widely applied to FL to prevent privacy leakage. Traditional private learning adds noise directly to the gradients, and the noise that continually accumulates on the model parameters severely impairs learning effectiveness. To solve this problem, we introduce the idea of differentially private continuous data release (DPCR) into FL and propose an FL framework based on DPCR (FL-DPCR). Our Equivalent Aggregation Theorem demonstrates that DPCR effectively reduces the overall error added to the parameter models and thereby improves FL's accuracy. To further improve FL-DPCR's learning effectiveness, we use the Matrix Mechanism to construct a release strategy and design a binary-indexed-tree (BIT) based DPCR model for the Gaussian mechanism (BCRG). By solving a complex nonlinear programming problem with negative exponents, BCRG achieves optimal release accuracy efficiently. In addition, we exploit the residual privacy budget to boost accuracy further and propose an advanced version of BCRG (ABCRG). Our experiments show that, compared with traditional FL with DP, our approach improves accuracy by margins ranging from $3.4\%$ on FMNIST to $65.7\%$ on PAMAP2.
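The core idea behind BIT-based DPCR builds on the classic binary (tree) mechanism for continuous release: each stream element is folded into O(log T) tree nodes, every node is perturbed exactly once, and any prefix sum is reconstructed from at most O(log T) noisy nodes, so the accumulated error grows logarithmically rather than linearly in the number of releases. The Python sketch below illustrates that general mechanism with Gaussian noise; it is not the paper's optimized BCRG strategy, and the class name, the stream length T, and the uniform per-node noise scale sigma are placeholders of our own.

```python
import math

import numpy as np


class BinaryTreeRelease:
    """Tree-based continuous release of prefix sums under Gaussian noise.

    Each of the T stream elements is folded into O(log T) tree nodes,
    every node is perturbed exactly once, and each prefix sum is
    rebuilt from at most O(log T) noisy nodes, so the error grows
    logarithmically in T instead of linearly as with direct
    per-step noise accumulation.
    """

    def __init__(self, T, sigma, rng=None):
        self.T = T                     # total number of releases
        self.sigma = sigma             # per-node Gaussian noise scale (placeholder)
        self.levels = max(1, math.ceil(math.log2(T)) + 1)
        self.alpha = np.zeros(self.levels)  # exact partial sums per level
        self.noisy = np.zeros(self.levels)  # their noisy releases
        self.t = 0
        self.rng = rng or np.random.default_rng()

    def step(self, x):
        """Ingest one stream element, return the noisy running sum."""
        self.t += 1
        # Level of the lowest set bit of t: the node that closes now.
        i = (self.t & -self.t).bit_length() - 1
        # Fold all lower-level partial sums into node i, then reset them.
        self.alpha[i] = x + self.alpha[:i].sum()
        self.alpha[:i] = 0.0
        self.noisy[i] = self.alpha[i] + self.rng.normal(0.0, self.sigma)
        self.noisy[:i] = 0.0
        # The prefix sum uses one noisy node per set bit of t.
        return sum(self.noisy[j] for j in range(self.levels)
                   if (self.t >> j) & 1)


# Usage: release 8 noisy running sums of a toy gradient stream.
tree = BinaryTreeRelease(T=8, sigma=0.1, rng=np.random.default_rng(0))
for x in [1.0, 0.5, -0.2, 0.8, 0.1, 0.3, -0.4, 0.6]:
    print(round(tree.step(x), 3))
```

In an FL setting, x would be one coordinate of a clipped per-round model update and the returned value its noisy running aggregate; per the abstract, BCRG's contribution is to allocate noise across the tree nodes optimally via the Matrix Mechanism rather than using a uniform sigma as in this sketch.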
Keywords
Federated learning, Differential Privacy, Continuous Data Release, Binary Indexed Tree, Matrix Mechanism