FedRDF: A Robust and Dynamic Aggregation Function against Poisoning Attacks in Federated Learning
CoRR (2024)
Abstract
Federated Learning (FL) represents a promising approach to typical privacy
concerns associated with centralized Machine Learning (ML) deployments. Despite
its well-known advantages, FL is vulnerable to security attacks such as
Byzantine behaviors and poisoning attacks, which can significantly degrade
model performance and hinder convergence. The effectiveness of existing
approaches to mitigate complex attacks, such as median, trimmed mean, or Krum
aggregation functions, has been only partially demonstrated in the case of
specific attacks. Our study introduces a novel robust aggregation mechanism
utilizing the Fourier Transform (FT), which can effectively handle
sophisticated attacks without prior knowledge of the number of attackers.
With this technique, the weights generated by FL clients are projected
into the frequency domain to ascertain their density function, selecting the
one exhibiting the highest frequency. Consequently, malicious clients' weights
are excluded. Our proposed approach was tested against various model poisoning
attacks, demonstrating superior performance over state-of-the-art aggregation
methods.
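The mechanism described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the FFT-based kernel density estimate, the per-coordinate mode selection, and all function names, parameters, and defaults are assumptions made for the sake of a runnable example.

```python
import numpy as np

def ft_density_mode(samples, grid_size=256, bandwidth=None):
    """Estimate the density of 1-D samples with an FFT-smoothed histogram
    and return the grid value where the density peaks (the mode).
    Illustrative only; bandwidth defaults to Silverman's rule of thumb."""
    samples = np.asarray(samples, dtype=float)
    if bandwidth is None:
        bandwidth = 1.06 * samples.std() * len(samples) ** -0.2 + 1e-12
    lo = samples.min() - 3 * bandwidth
    hi = samples.max() + 3 * bandwidth
    grid = np.linspace(lo, hi, grid_size)
    # Bin the samples, then smooth by convolving with a Gaussian kernel
    # in the frequency domain (the FT-based density step).
    hist, _ = np.histogram(samples, bins=grid_size, range=(lo, hi))
    kernel = np.exp(-0.5 * ((grid - grid.mean()) / bandwidth) ** 2)
    density = np.real(np.fft.ifft(
        np.fft.fft(hist) * np.fft.fft(np.fft.ifftshift(kernel))))
    return grid[np.argmax(density)]

def robust_aggregate(client_weights):
    """Aggregate client weight vectors coordinate-wise by keeping the
    value of highest estimated density, so outlying (potentially
    malicious) contributions are effectively excluded."""
    W = np.stack(client_weights)  # shape: (n_clients, n_params)
    return np.array([ft_density_mode(W[:, j]) for j in range(W.shape[1])])
```

In this sketch, a cluster of honest clients dominates the density peak, so a minority of poisoned weight vectors far from that cluster does not shift the aggregate the way a plain mean would, and no count of attackers is needed in advance.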