Scalable Federated Unlearning via Isolated and Coded Sharding
CoRR (2024)
Abstract
Federated unlearning has emerged as a promising paradigm for erasing
client-level data effects without degrading the performance of collaborative
learning models. However, the federated unlearning process often introduces
extensive storage overhead and consumes substantial computational resources,
thus hindering its implementation in practice. To address this issue, this
paper proposes a scalable federated unlearning framework based on isolated
sharding and coded computing. We first divide distributed clients into multiple
isolated shards across stages to reduce the number of clients being affected.
Then, to reduce the storage overhead of the central server, we develop a coded
computing mechanism by compressing the model parameters across different
shards. In addition, we provide the theoretical analysis of time efficiency and
storage effectiveness for the isolated and coded sharding. Finally, extensive
experiments on two typical learning tasks, i.e., classification and generation,
demonstrate that our proposed framework can achieve better performance than
three state-of-the-art frameworks in terms of accuracy, retraining time,
storage overhead, and F1 scores for resisting membership inference attacks.
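The abstract describes two core ideas: isolating clients into shards so that unlearning one client only affects its own shard, and compressing per-shard model parameters on the server via coded computing. The following is a minimal, hypothetical sketch of those ideas, not the paper's exact algorithm; the function names, the use of random linear coding for compression, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def assign_shards(client_ids, num_shards):
    """Partition clients into isolated shards (round-robin here, as a toy
    choice); unlearning a client then only requires retraining its shard."""
    return {s: client_ids[s::num_shards] for s in range(num_shards)}

def encode(shard_params, num_coded):
    """Illustrative coded storage: compress the stack of per-shard parameter
    vectors with a random linear code, keeping num_coded < num_shards coded
    vectors plus the coding matrix G instead of every shard model."""
    P = np.stack(shard_params)                            # (num_shards, dim)
    G = rng.standard_normal((num_coded, len(shard_params)))
    return G, G @ P                                       # coded vectors

clients = list(range(12))
shards = assign_shards(clients, num_shards=4)             # 4 isolated shards
params = [rng.standard_normal(8) for _ in shards]         # toy shard models
G, coded = encode(params, num_coded=2)
print(coded.shape)                                        # fewer stored vectors than shards
```

In this sketch, removing a client would trigger retraining only inside its shard, after which the affected shard's parameter vector is re-encoded; the server's storage scales with the number of coded vectors rather than the number of shards.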