FedPerturb: Covert Poisoning Attack on Federated Learning via Partial Perturbation

Tongsai Jin, Fu Zhihui, Dan Meng, Jun Wang, Yue Qi, Guitao Cao

ECAI 2023 (2023)

Abstract
Federated learning breaks through the barrier between data owners by allowing them to collaboratively train a shared machine learning model without compromising the privacy of their own data. However, federated learning also faces the threat of poisoning attacks, especially via client model updates, which may impair the accuracy of the global model. To defend against poisoning attacks, previous work aims to identify malicious updates in high-dimensional spaces. However, we find that distances in high-dimensional spaces cannot identify changes confined to a small subset of dimensions, and that such small changes can severely affect the global model. Based on this finding, we propose an untargeted poisoning attack in the federated learning setting via partial perturbation of a small subset of carefully selected model parameters, and present two strategies for selecting the attack targets. We experimentally demonstrate that the proposed attack achieves a high success rate against five state-of-the-art defense schemes. Furthermore, the attack remains effective at low malicious client ratios, and still circumvents three defense schemes with a malicious client ratio as low as 2%.
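The core idea above can be illustrated with a minimal sketch: perturb only a small fraction of an update's coordinates so the overall distance to the benign update stays small. The selection rule below (top-k by magnitude) and the sign-flip perturbation are illustrative assumptions; the paper's actual selection strategies are not reproduced here.

```python
import numpy as np

def partial_perturb(update, frac=0.01, scale=-1.0):
    """Illustrative partial perturbation: flip the sign of only the
    top-`frac` fraction of coordinates by magnitude, leaving the rest
    untouched. (Hypothetical selection rule, not the paper's exact one.)"""
    perturbed = update.copy()
    k = max(1, int(frac * perturbed.size))
    # indices of the k largest-magnitude coordinates
    idx = np.argpartition(np.abs(perturbed), -k)[-k:]
    perturbed[idx] *= scale
    return perturbed

# A benign update in a high-dimensional space
rng = np.random.default_rng(0)
benign = rng.normal(size=10_000)
malicious = partial_perturb(benign, frac=0.01)

# Only 1% of coordinates differ, so a distance-based defense that
# compares whole update vectors sees a relatively small change.
changed = np.sum(malicious != benign)
rel_dist = np.linalg.norm(malicious - benign) / np.linalg.norm(benign)
print(changed, round(rel_dist, 3))
```

Because only a 1% slice of the vector is modified, the perturbed update remains close to the benign one in Euclidean distance, which is consistent with the paper's observation that high-dimensional distance metrics struggle to flag changes in a small subset of dimensions.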