Kick Bad Guys Out! Zero-Knowledge-Proof-Based Anomaly Detection in Federated Learning
CoRR (2023)
Abstract
Federated Learning (FL) systems are vulnerable to adversarial attacks, in which
malicious clients submit poisoned models to prevent the global model from
converging, or plant backdoors that cause the global model to misclassify
certain samples. Current defense methods fall short in real-world FL systems:
they either rely on impractical prior knowledge or introduce accuracy loss even
when no attack occurs. Moreover, these methods provide no protocol for
verifying their execution, leaving participants unable to confirm that the
defense was carried out correctly. To address these issues, we propose a novel
anomaly detection strategy designed for real-world FL systems. Our approach
activates the defense only when an attack occurs, and accurately removes
malicious models without affecting benign ones. Additionally, it incorporates
zero-knowledge proofs to guarantee the integrity of the defense mechanism.
Experimental results demonstrate the effectiveness of our approach in enhancing
the security of FL systems against adversarial attacks.
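To make the idea of attack-conditional filtering concrete, here is a minimal Python sketch. It is not the paper's mechanism: the detection statistic (each client's distance to the coordinate-wise median update) and the 3-MAD outlier threshold are illustrative assumptions, and the zero-knowledge verification layer is omitted entirely.

```python
# Hypothetical sketch of attack-conditional anomaly filtering in FL.
# The detection statistic and threshold below are illustrative
# assumptions, not the paper's exact method.
import numpy as np

def filter_client_updates(updates):
    """Return the subset of client updates kept for aggregation.

    updates: list of 1-D np.ndarray, one flattened model update per client.
    """
    stacked = np.stack(updates)                       # (n_clients, n_params)
    median = np.median(stacked, axis=0)               # robust center
    dists = np.linalg.norm(stacked - median, axis=1)  # per-client deviation
    mad = np.median(np.abs(dists - np.median(dists)))
    if mad == 0:  # all clients agree: treat the round as attack-free
        return updates
    # Flag clients whose deviation is an extreme outlier; keep the rest.
    keep = dists <= np.median(dists) + 3.0 * mad
    # If nothing looks anomalous, skip filtering entirely, so benign
    # rounds incur no accuracy loss (the "activate only under attack" idea).
    if keep.all():
        return updates
    return [u for u, k in zip(updates, keep) if k]

# Example: 9 benign clients plus one poisoned (scaled) update.
rng = np.random.default_rng(0)
benign = [rng.normal(0, 0.1, size=100) for _ in range(9)]
poisoned = rng.normal(0, 0.1, size=100) * 50.0
kept = filter_client_updates(benign + [poisoned])
print(len(kept))  # 9: the poisoned update is removed
```

The point of the early-return branches is the property the abstract emphasizes: when no update looks anomalous, aggregation proceeds over all clients unchanged, so the defense costs nothing in attack-free rounds.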