Privacy-Preserving Detection of Poisoning Attacks in Federated Learning

2022 19th Annual International Conference on Privacy, Security & Trust (PST)

Abstract
With federated learning, local learners train a shared global model on their own data and report model updates to a server, which aggregates them to update the global model. This learning paradigm is exposed to two kinds of attacks: privacy attacks by an untrusted server, and adversarial attacks (e.g., poisoning attacks) by malicious learners. There is extensive research on addressing each of these attacks separately, but no existing scheme addresses both. In this paper, we propose a scheme that enables both privacy-preserving aggregation and poisoning-attack detection at the server, by utilizing additive homomorphic encryption and a trusted execution environment (TEE). Our evaluation on an implemented prototype system demonstrates that our scheme attains a level of detection accuracy similar to the state-of-the-art poisoning detection scheme, and that the increased computational workload can be parallelized and mostly executed outside the TEE. A privacy analysis shows that the proposed scheme protects individual learners' model updates from being exposed.
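The building block behind the privacy-preserving aggregation described above is additive homomorphic encryption: ciphertexts can be summed by the server without decryption, so individual updates stay hidden and only the aggregate is ever revealed. The following is a minimal sketch of that idea using the python-paillier library (`pip install phe`); the helper names (`encrypt_update`, `aggregate`) and the toy three-parameter model are illustrative assumptions, not the paper's actual protocol, and the single keypair here stands in for the key the paper keeps inside the TEE.

```python
# Minimal sketch of additively homomorphic aggregation for federated learning.
# Uses python-paillier (https://github.com/data61/python-paillier).
from phe import paillier

# In the paper's setting, the decryption key would live inside the TEE;
# a single local keypair stands in for that trust boundary here.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

def encrypt_update(update):
    """Learner side: encrypt each coordinate of a model update."""
    return [public_key.encrypt(x) for x in update]

def aggregate(encrypted_updates):
    """Server side: sum ciphertexts coordinate-wise without decrypting.
    Additive homomorphism: Enc(a) + Enc(b) decrypts to a + b."""
    num_params = len(encrypted_updates[0])
    return [sum(u[i] for u in encrypted_updates) for i in range(num_params)]

# Three learners with toy 3-parameter updates.
updates = [[0.1, -0.2, 0.3], [0.0, 0.5, -0.1], [0.2, 0.1, 0.0]]
ciphertexts = [encrypt_update(u) for u in updates]
encrypted_sum = aggregate(ciphertexts)

# Only the (TEE-held) private key can reveal the aggregate; the server
# never sees any individual learner's update in the clear.
average_update = [private_key.decrypt(c) / len(updates) for c in encrypted_sum]
print(average_update)  # approx. [0.1, 0.1333, 0.0667]
```

Note that this sketch covers only the aggregation half of the scheme; the poisoning-detection logic, which needs plaintext access to compare updates, is what the paper delegates to the TEE.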
Keywords
privacy-preserving detection, poisoning attacks, federated learning, learning paradigm, privacy attacks, untrusted server, adversarial attacks, malicious learners, privacy-preserving aggregation, poisoning attack detection, detection accuracy, privacy analysis, poisoning detection scheme