FedPerturb: Covert Poisoning Attack on Federated Learning via Partial Perturbation

Tongsai Jin, Fu Zhihui, Dan Meng, Jun Wang, Yue Qi, Guitao Cao

ECAI 2023 (2023)

Abstract
Federated learning breaks down the barriers between data owners by allowing them to collaboratively train a shared machine learning model without compromising the privacy of their own data. However, federated learning also faces the threat of poisoning attacks, particularly through malicious client model updates, which can impair the accuracy of the global model. To defend against such attacks, previous work aims to identify malicious updates by their distance from benign ones in high-dimensional space. However, we find that distances in high-dimensional spaces cannot reveal changes confined to a small subset of dimensions, and that these small changes can severely affect the global model. Based on this finding, we propose an untargeted poisoning attack in the federated learning setting that applies partial perturbations to a small subset of carefully selected model parameters, and we present two attack-target selection strategies. We experimentally demonstrate that the proposed attack achieves a high attack success rate against five state-of-the-art defense schemes. Furthermore, the attack remains effective at low malicious client ratios, still circumventing three of the defense schemes with a malicious client ratio as low as 2%.
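The central observation, that a perturbation confined to a tiny fraction of coordinates barely moves an update in full-vector L2 distance yet changes those coordinates drastically, can be illustrated with a short Python sketch. This is a hypothetical illustration under assumed choices, not the paper's algorithm: the largest-magnitude selection rule, the scale factor, and the name partial_perturbation are assumptions made here for demonstration.

    import numpy as np

    def partial_perturbation(update, frac=0.001, scale=-2.0):
        """Minimal sketch of a partial-perturbation poisoning step.

        Hypothetical: flips and amplifies only a small fraction of the
        update's coordinates (here the largest-magnitude ones, an
        assumed selection rule), leaving the rest untouched.
        """
        poisoned = update.copy()
        k = max(1, int(frac * update.size))
        # Assumed selection strategy: the k largest-magnitude coordinates.
        idx = np.argsort(np.abs(update))[-k:]
        # Invert and amplify only the selected coordinates.
        poisoned[idx] = scale * update[idx]
        return poisoned

    # Toy benign update: 100k parameters drawn from a small Gaussian.
    rng = np.random.default_rng(0)
    benign = rng.normal(0.0, 0.01, size=100_000)
    poisoned = partial_perturbation(benign)

    rel_dist = np.linalg.norm(poisoned - benign) / np.linalg.norm(benign)
    print("coordinates changed:", np.count_nonzero(poisoned != benign))
    print("relative L2 distance:", round(rel_dist, 3))

Each selected coordinate is flipped in sign and doubled in magnitude, yet the relative L2 distance over the whole vector stays well below that per-coordinate change, which is why a defense that compares full-vector distances may fail to flag such an update.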