EGIA: An External Gradient Inversion Attack in Federated Learning

IEEE Trans. Inf. Forensics Secur. (2023)

Abstract
Federated learning (FL) has achieved state-of-the-art performance in distributed learning tasks with privacy requirements. However, FL has been shown to be vulnerable to adversarial attacks. Typical gradient inversion attacks attempt to recover a client's private input in a white-box manner, where the adversary is assumed to be either a client or the server. But if both the clients and the server are honest and fully trusted, is FL secure? In this paper, we propose a novel method called the External Gradient Inversion Attack (EGIA), which operates in a grey-box setting. Specifically, we exploit the widely ignored fact that the gradients shared in FL are always transmitted through intermediary nodes. On this basis, we demonstrate that an external adversary can reconstruct the private input from the gradients even if both the clients and the server are honest and fully trusted. We also provide a comprehensive theoretical analysis of the black-box attack scenario, in which the adversary has access only to the gradients. We perform extensive experiments on multiple real-world datasets to test the effectiveness of EGIA. The experimental results validate that EGIA is highly effective.
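As a concrete illustration of the gradient-matching idea underlying such attacks, the sketch below shows how an eavesdropper who intercepts a single gradient update on an intermediary node, and who knows the model architecture and weights (as in a grey-box setting), can optimize a dummy input until its gradients match the intercepted ones by Euclidean distance. This is a minimal DLG-style reconstruction (Zhu et al., "Deep Leakage from Gradients", 2019), not the paper's EGIA algorithm; the toy model, tensor shapes, and optimizer settings are illustrative assumptions.

```python
# Minimal gradient-matching sketch (DLG-style), NOT the paper's EGIA method.
# Assumes the eavesdropper intercepted one gradient update and knows the
# model architecture and weights.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy model
criterion = nn.CrossEntropyLoss()

# --- Victim side: the gradient an intermediary node could observe ---
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])
loss = criterion(model(x_true), y_true)
true_grads = [g.detach() for g in
              torch.autograd.grad(loss, model.parameters())]

# --- Attacker side: optimize a dummy input/label to match the gradients ---
x_dummy = torch.rand(1, 1, 28, 28, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)  # unknown label, kept soft
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

for step in range(100):
    def closure():
        optimizer.zero_grad()
        dummy_loss = criterion(model(x_dummy), y_dummy.softmax(dim=-1))
        dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                          create_graph=True)
        # Euclidean distance between intercepted and dummy gradients
        grad_diff = sum(((dg - tg) ** 2).sum()
                        for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff
    optimizer.step(closure)

print("reconstruction error:", (x_dummy.detach() - x_true).norm().item())
```

In this sketch the attacker never queries the clients or the server; everything it needs is the intercepted gradient and the shared model, which is what makes the external, grey-box threat model plausible.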
Keywords
Servers, Glass box, Data privacy, Training, Euclidean distance, Security, Generative adversarial networks, Federated learning, gradient inversion, grey-box attack, black-box attack, external adversary