A Comprehensive Study of Gradient Inversion Attacks in Federated Learning and Baseline Defense Strategies

2023 57th Annual Conference on Information Sciences and Systems (CISS)

Abstract
With a greater emphasis on data confidentiality and legislation, collaborative machine learning algorithms are being developed to protect sensitive private data. Federated learning (FL) is the most popular of these methods: it enables collaborative model construction among a large number of users without requiring explicit data sharing. Because FL models are built in a distributed manner with a gradient-sharing protocol, they are vulnerable to "gradient inversion attacks," in which sensitive training data is extracted from raw gradients. Gradient inversion attacks are regarded as among the most severe privacy risks in FL: attackers covertly eavesdrop on gradient updates and backtrack from the gradients to recover information about the raw data without degrading model training quality. Even without prior knowledge of the private data, an attacker can breach the secrecy and confidentiality of the training data via the intermediate gradients. Existing FL training protocols have been shown to exhibit vulnerabilities that adversaries both within and outside the system can exploit to compromise data privacy. It is therefore critical to make FL system designers aware of the privacy-preservation implications of future FL algorithm design. Motivated by this, our work explores data confidentiality and integrity in FL, emphasizing the intuitions, approaches, and fundamental assumptions underlying existing gradient inversion attack strategies for retrieving data. We then examine the limitations of the different approaches and evaluate their qualitative performance in retrieving raw data. Finally, we assess the effectiveness of baseline defense mechanisms against these attacks for robust privacy preservation in FL.
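To make the attack mechanism concrete, the following is a minimal sketch of a DLG-style gradient inversion loop in the spirit of "Deep Leakage from Gradients" (Zhu et al., 2019): the attacker optimizes dummy inputs and labels so that the gradient they induce matches an observed gradient. The model, data shapes, and hyperparameters here are illustrative assumptions, not details taken from this paper.

```python
# Minimal DLG-style gradient inversion sketch (illustrative assumptions:
# a tiny linear classifier on 28x28 inputs and a single observed gradient).
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()

# The victim computes a gradient on its private batch; the attacker observes it.
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])
true_grads = [g.detach() for g in torch.autograd.grad(
    criterion(model(x_true), y_true), model.parameters())]

# The attacker initializes dummy data and (soft) dummy labels to optimize.
x_dummy = torch.rand(1, 1, 28, 28, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    optimizer.zero_grad()
    # Cross-entropy with soft dummy labels so the label is also recoverable.
    dummy_loss = torch.sum(-torch.softmax(y_dummy, dim=-1)
                           * torch.log_softmax(model(x_dummy), dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                      create_graph=True)
    # Match the dummy gradient to the observed gradient (L2 distance).
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(50):
    loss = optimizer.step(closure)

print("gradient-matching loss:", loss.item())
# x_dummy now approximates x_true. A baseline defense such as clipping,
# quantizing, or adding Gaussian noise to true_grads before sharing
# perturbs this matching signal and degrades the reconstruction.
```

As the closing comment notes, the baseline defenses evaluated in this line of work (e.g., gradient perturbation or quantization) act precisely on the shared gradients that this optimization loop consumes.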
Keywords
Model inversion attacks, Gradient leakage attacks, Mixed quantization, Federated learning