Federated Unlearning and Its Privacy Threats

IEEE Network (2023)

Abstract
Federated unlearning has emerged very recently as an attempt to realize "the right to be forgotten" in the context of federated learning. While the current literature focuses on designing efficient retraining or approximate unlearning approaches, it largely ignores the information-leakage risks introduced by the discrepancy between the models before and after unlearning. In this paper, we present a comprehensive review of prior studies on federated unlearning and on privacy leakage from model updates. We propose new taxonomies to categorize and summarize state-of-the-art federated unlearning algorithms. We present our findings on the inherent vulnerability of the federated unlearning paradigm to inference attacks and summarize defense techniques with the potential to prevent information leakage. Finally, we suggest a privacy-preserving federated unlearning framework and outline promising research directions to facilitate future studies.
Keywords
privacy threats