Reconstruct Attack from Historical Updates Against Byzantine Robust Federated Learning

Junhong Situ, Min-Rong Chen, Hancong Chen

2023 2nd International Conference on Frontiers of Communications, Information System and Data Science (CISDS), 2023

Abstract
Federated Learning (FL) enables distributed machine learning while preserving privacy: participants share models rather than raw data, breaking down data silos. However, the FL global model is vulnerable to Byzantine attacks, leaving adversaries an opportunity to disrupt the FL process. Most previous attack designs have focused on breaking robust aggregation defenses while largely ignoring the significant overhead of constructing an attack and the difficulty of achieving omniscience under real-world conditions. In this paper, we propose an energy-efficient, non-omniscient attack scheme that can be executed on any device. Our scheme, the Historical Updates Reconstruction Attack (HURA), makes full use of historical updates from both the local and global sides. Experimental results demonstrate that HURA remains effective in several Byzantine-robust FL scenarios while incurring significantly lower overhead than comparable attack schemes.
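The abstract gives no implementation details of HURA itself. As a loose, hypothetical illustration of the general idea only (not the paper's algorithm), a non-omniscient attacker could reuse stored snapshots of the global model to estimate the benign update direction and then submit a scaled, sign-flipped update, requiring no knowledge of other clients' gradients. All function names and the scaling factor below are assumptions for the sketch:

```python
import numpy as np

# Hypothetical sketch of a history-based, non-omniscient model-poisoning
# attack in federated learning. This is NOT the paper's HURA scheme; it
# only illustrates crafting a malicious update from historical global
# models, without access to other clients' data or gradients.

def historical_direction(global_models):
    """Estimate the benign update direction from stored global snapshots."""
    diffs = [g2 - g1 for g1, g2 in zip(global_models, global_models[1:])]
    return np.mean(diffs, axis=0)

def craft_malicious_update(global_models, scale=1.0):
    """Push opposite to the estimated benign direction (sign-flip style);
    `scale` is an assumed attacker-chosen magnitude."""
    return -scale * historical_direction(global_models)

# Toy example: three rounds of a 4-parameter global model drifting by +0.1
# per round, so the estimated benign direction is +0.1 per parameter.
history = [np.full(4, 0.1 * t) for t in range(3)]
malicious = craft_malicious_update(history, scale=2.0)
```

Because the attacker only needs past global models it already received, such a sketch is cheap to run on any device, which is consistent with the low-overhead, non-omniscient setting the abstract describes.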
Keywords
Federated Learning, Model Poisoning, Byzantine Robustness, Non-omniscient Attack