On the Byzantine-Resilience of Distillation-Based Federated Learning
CoRR (2024)
Abstract
Federated Learning (FL) algorithms using Knowledge Distillation (KD) have received increasing attention due to their favorable properties with respect to privacy, non-i.i.d. data, and communication cost. Instead of transmitting model parameters, these methods communicate information about a learning task by sharing predictions on a public dataset. In this work, we study the performance of such approaches in the Byzantine setting, where a subset of the clients acts adversarially, aiming to disrupt the learning process. We show that KD-based FL algorithms are remarkably resilient, and we analyze how Byzantine clients can influence the learning process compared to Federated Averaging. Based on these insights, we introduce two new Byzantine attacks and demonstrate that they are effective against prior Byzantine-resilient methods. Additionally, we propose FilterExp, a novel method designed to enhance the Byzantine resilience of KD-based FL algorithms, and demonstrate its efficacy. Finally, we provide a general method that makes attacks harder to detect, improving their effectiveness.
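
The abstract describes the core communication pattern of KD-based FL: rather than exchanging model parameters, clients share their predictions on a public dataset, and the server fuses these into distillation targets. Below is a minimal NumPy sketch of that pattern. All names are hypothetical, and the robust option shown (a coordinate-wise median) is a generic, standard Byzantine-robust aggregator used for illustration only; it is not the paper's FilterExp method.

    import numpy as np

    def aggregate_predictions(client_preds, robust=False):
        """Fuse per-client predictions on a shared public dataset.

        client_preds has shape (n_clients, n_public_examples, n_classes),
        each slice holding one client's softmax outputs on the public set.
        Returns consensus soft labels to be used as distillation targets.
        """
        preds = np.asarray(client_preds)
        if robust:
            # Coordinate-wise median: a standard Byzantine-robust aggregator,
            # shown for illustration only (NOT the paper's FilterExp method).
            agg = np.median(preds, axis=0)
        else:
            # Plain mean, as in basic federated distillation.
            agg = preds.mean(axis=0)
        # Renormalize so every row is a valid probability distribution.
        return agg / agg.sum(axis=-1, keepdims=True)

    # Toy usage: four honest clients plus one Byzantine client whose
    # predictions are adversarially permuted across classes.
    rng = np.random.default_rng(0)
    honest = rng.dirichlet(np.full(3, 5.0), size=(4, 10))  # (clients, examples, classes)
    byzantine = honest[0, :, ::-1][None]
    targets = aggregate_predictions(np.concatenate([honest, byzantine]), robust=True)

With the plain mean, a single Byzantine client shifts every distillation target; the median bounds its influence. How much a Byzantine client can shift the aggregate, and how to limit it, is the kind of resilience question the paper studies.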