GFL-ALDPA: a gradient compression federated learning framework based on adaptive local differential privacy budget allocation

Jiawei Yang, Shuhong Chen, Guojun Wang, Zijia Wang, Zhiyong Jie, Muhammad Arif

Multimedia Tools and Applications (2024)

Abstract
Federated learning (FL) is a popular distributed machine learning framework that protects users' private data from being exposed to adversaries. However, related work shows that sensitive private information can still be compromised by analyzing the parameters uploaded by clients. Applying differential privacy to federated learning has become a popular way to achieve strict privacy guarantees in recent years. To reduce the impact of noise, this paper applies local differential privacy (LDP) to federated learning. We propose a gradient compression federated learning framework based on adaptive local differential privacy budget allocation (GFL-ALDPA). We introduce a novel adaptive privacy budget allocation scheme based on communication rounds, which reduces the loss of privacy budget and the amount of model noise: by assigning different privacy budgets to different communication rounds during training, it makes the most of the limited privacy budget and improves model accuracy. Furthermore, we propose a gradient compression mechanism based on dimension reduction, which simultaneously reduces the communication cost, the overall noise magnitude, and the loss of the model's total privacy budget, ensuring accuracy under a given privacy-preserving guarantee. Finally, the paper presents an experimental evaluation on the MNIST dataset. Theoretical analysis and experiments demonstrate that our framework achieves a better trade-off among privacy preservation, communication efficiency, and model accuracy.
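To make the two mechanisms in the abstract concrete, the following is a minimal client-side sketch in Python. It assumes a linearly increasing per-round budget schedule, a top-k compressor for dimension reduction, and the Laplace mechanism for LDP perturbation; none of these specifics (function names included) are confirmed by the abstract, which does not state the paper's exact allocation rule or compression operator.

```python
import numpy as np

def allocate_budget(total_epsilon: float, num_rounds: int) -> np.ndarray:
    """Split a total privacy budget across communication rounds.

    One plausible adaptive schedule: spend more budget (less noise)
    in later rounds, where the model is closer to convergence.
    """
    weights = np.arange(1, num_rounds + 1, dtype=float)
    return total_epsilon * weights / weights.sum()

def compress_gradient(grad: np.ndarray, k: int):
    """Top-k dimension reduction: keep only the k largest-magnitude
    coordinates, so noise is added in a k-dimensional space instead
    of the full parameter space."""
    idx = np.argsort(np.abs(grad))[-k:]
    return idx, grad[idx]

def perturb(values: np.ndarray, epsilon: float, sensitivity: float) -> np.ndarray:
    """Laplace mechanism applied locally to the compressed gradient."""
    scale = sensitivity / epsilon
    return values + np.random.laplace(0.0, scale, size=values.shape)

# Usage in one communication round (round 0 of 50):
np.random.seed(0)
grad = np.random.normal(size=1000)      # stand-in for a local gradient
budgets = allocate_budget(total_epsilon=8.0, num_rounds=50)
idx, vals = compress_gradient(grad, k=100)
noisy_vals = perturb(vals, epsilon=budgets[0], sensitivity=1.0)
# The client uploads (idx, noisy_vals); the server reassembles a
# sparse update, so both communication cost and added noise scale
# with k rather than with the full model dimension.
```

Because the noise scale is sensitivity/epsilon per perturbed coordinate, shrinking the dimension from 1000 to k=100 reduces the total injected noise, which is the trade-off the framework exploits.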
Keywords
Federated learning, Differential privacy, Privacy-preserving, Gradient compression, Privacy budget allocation