Faster Convergence on Differential Privacy-Based Federated Learning

IEEE Internet of Things Journal (2024)

Abstract
Federated learning (FL) is a distributed machine learning approach that trains a global model while preserving data privacy. However, several studies have shown that adversaries can still recover private information from the shared gradients. Differential privacy (DP), a rigorous mathematical tool for protecting records in a database against leakage, has been widely applied in FL by perturbing the gradients. Nevertheless, applying DP in FL inevitably degrades the convergence performance of the global model. In this paper, we implement a DP-based FL scheme that achieves local DP (LDP) by adding well-designed Gaussian noise to the gradients before clients upload them to the server. We then propose two strategies to improve the convergence performance of DP-based FL. Both strategies modify the local objective function to limit the effect of LDP noise on convergence without degrading the privacy protection level. We then present a detailed framework that combines the LDP scheme with the two strategies. Simulation results on different machine learning models show that our framework converges up to 40% faster under various noise levels than other DP-based FL schemes. Finally, we establish a theoretical convergence guarantee for the proposed framework by first deriving the expected decrease in the global loss function for one round of training and then providing an upper convergence bound after multiple communication rounds.
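The LDP mechanism the abstract describes, clipping each client's gradient and adding calibrated Gaussian noise before upload, can be sketched as follows. This is a minimal illustration of the standard Gaussian mechanism, not the paper's exact implementation; the function name, the clipping threshold `clip_norm`, and the noise multiplier `sigma` are assumptions for the example.

```python
import numpy as np

def perturb_gradient(grad, clip_norm=1.0, sigma=1.0, rng=None):
    """Clip a gradient to bound its L2 sensitivity, then add Gaussian noise.

    This is the generic Gaussian mechanism used for local DP in FL:
    each client perturbs its gradient before sending it to the server.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Scale the gradient down so its L2 norm is at most clip_norm.
    norm = np.linalg.norm(grad)
    clipped = grad / max(1.0, norm / clip_norm)
    # Noise standard deviation is proportional to the sensitivity bound.
    noise = rng.normal(0.0, sigma * clip_norm, size=grad.shape)
    return clipped + noise
```

The server then averages the perturbed gradients from all clients as usual; the clipping step is what ties the noise scale to a known sensitivity bound.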
Key words
Privacy-preserving federated learning, differential privacy, convergence performance