A novel local differential privacy federated learning under multi-privacy regimes.

Expert Syst. Appl. (2023)

Abstract
Local differential privacy federated learning (LDP-FL) is a framework for achieving strong local data privacy protection while training a model in a decentralized environment. Current LDP-FL training suffers from efficiency problems because many existing studies combine LDP and FL without examining the relationship between the two most important quantities: the privacy budget for privacy protection and the gradients for model training. In this work, we propose a novel LDP-FL framework under multi-privacy regimes to address these problems. First, unlike existing multi-privacy-regime LDP-FL methods that compute a biased global gradient, we propose an unbiased mean estimator based on maximum likelihood estimation (MLE) that yields small-variance global gradients and higher training accuracy. Second, to improve training efficiency in multi-privacy scenarios, we design two dynamic privacy budget allocation approaches for users to choose from: the first allocates the privacy budget based on the training model's accuracy, while the second grows the budget linearly, avoiding the computational cost of the comparison operation. Finally, since directly perturbing high-dimensional local gradients, as in traditional methods, leads to considerable utility loss, we propose a layered dimension selection strategy that randomly selects the gradient layers that take part in the noise perturbation while leaving the others untouched. In simulations on the handwritten MNIST and Fashion-MNIST datasets, we compare our framework with traditional LDP-FL, simple personalized mean estimation (S-PME), and PLU-FedOA. The results demonstrate the training efficiency of our framework.
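The layered dimension selection idea described above can be illustrated with a minimal sketch: randomly pick a subset of gradient layers to perturb with Laplace noise while leaving the remaining layers untouched. This is a hypothetical illustration, not the paper's exact mechanism; the function name, the assumed sensitivity of 1 (as after gradient clipping), and the use of the Laplace mechanism are all assumptions.

```python
import numpy as np

def perturb_selected_layers(gradients, epsilon, num_selected, rng=None):
    """Perturb a random subset of gradient layers with Laplace noise.

    A hypothetical sketch of a layered dimension selection strategy:
    only `num_selected` randomly chosen layers receive LDP noise;
    the other layers are returned unchanged. Sensitivity is assumed
    to be 1 (e.g., after gradient clipping); the real calibration
    in the paper may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    selected = set(rng.choice(len(gradients), size=num_selected,
                              replace=False).tolist())
    noisy = []
    for i, g in enumerate(gradients):
        if i in selected:
            # Laplace mechanism: noise scale = sensitivity / epsilon
            noisy.append(g + rng.laplace(0.0, 1.0 / epsilon, size=g.shape))
        else:
            noisy.append(g.copy())  # unselected layers stay untouched
    return noisy, selected

# Example: three layers, perturb two of them with budget epsilon = 1.0
grads = [np.zeros(4), np.zeros(3), np.zeros(2)]
noisy, chosen = perturb_selected_layers(grads, epsilon=1.0, num_selected=2)
```

Under this design, reducing the number of perturbed layers lowers the total injected noise per round, which is one plausible way the strategy limits the utility loss of high-dimensional perturbation.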
Keywords
novel local differential privacy, learning, multi-privacy