Sine: Similarity is not enough for mitigating Local Model Poisoning Attacks in Federated Learning

IEEE Transactions on Dependable and Secure Computing (2024)

Abstract
Federated learning is a collaborative learning paradigm that brings the model to the edge for training over the participants' local data under the orchestration of a trusted server. Although this paradigm protects data privacy, the aggregator has no control over the local data or models at the edge, so malicious participants can perturb their locally held data or model to submit an insidious update that degrades global model accuracy. Recent Byzantine-robust aggregation rules can defend against data poisoning attacks, and model poisoning attacks have in turn become more ingenious and adaptive to existing defenses; however, these attacks are crafted against specific aggregation rules. This work presents a generic model poisoning attack framework named Sine (Similarity is not enough), which exploits vulnerabilities in cosine similarity to increase the impact of poisoning attacks by 20-30%. Sine makes convergence unachievable by keeping the attack persistent. Further, we propose an effective defense technique called FLTC (FL Trusted Coordinates) to counter such attacks. FLTC selects trusted coordinates and aggregates them based on the change in their direction and magnitude with respect to a trusted base model update. FLTC successfully defends against poisoning attacks, including adaptive model poisoning attacks, restricting the attack impact to 2-4%.
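The abstract names two mechanisms without detail: the scale-invariance of cosine similarity that Sine exploits, and FLTC's coordinate-wise trust rule. The sketch below is a minimal illustration of both ideas under our own reading of the abstract; all function names, the magnitude threshold, and the fallback behavior are assumptions, not the paper's actual algorithms.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flat update vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def fltc_style_aggregate(updates, base_update, mag_ratio=2.0):
    """Coordinate-wise aggregation loosely modeled on the FLTC description:
    per coordinate, keep only client values whose sign agrees with the
    trusted base update and whose magnitude stays within `mag_ratio` of it,
    then average the survivors (falling back to the base value where no
    client survives). `mag_ratio` is an assumed hyperparameter."""
    updates = np.stack(updates)                       # (n_clients, dim)
    same_sign = np.sign(updates) == np.sign(base_update)
    bounded = np.abs(updates) <= mag_ratio * np.abs(base_update)
    trusted = same_sign & bounded                     # per-coordinate trust mask
    counts = trusted.sum(axis=0)
    return np.where(counts > 0,
                    (updates * trusted).sum(axis=0) / np.maximum(counts, 1),
                    base_update)

rng = np.random.default_rng(0)
benign = rng.normal(size=1000)          # stand-in for an honest model update
malicious = 25.0 * benign               # same direction, inflated magnitude

# Cosine similarity is scale-invariant, so a similarity-only defense
# cannot distinguish the inflated update from the honest one (~1.0):
print(cosine_similarity(benign, malicious))

# A direction-and-magnitude rule clips the attacker's influence:
clients = [benign + 0.1 * rng.normal(size=1000) for _ in range(4)] + [malicious]
agg = fltc_style_aggregate(clients, base_update=benign)
print(np.linalg.norm(agg - benign))     # stays close to the trusted update
```

Since the scaled update has cosine similarity 1.0 to every honest update, any rule that trusts direction alone admits it at full weight; bounding per-coordinate magnitude relative to a trusted reference is what removes that degree of freedom.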
Keywords
Federated Learning, Local Model Poisoning Attack, Hyper-Spherical Direction Cosine, Byzantine-Robust Aggregation