GradDiff: Gradient-based membership inference attacks against federated distillation with differential comparison

Xiaodong Wang, Longfei Wu, Zhitao Guan

Information Sciences (2024)

Abstract
Membership inference attacks (MIAs) have posed a great threat to federated learning (FL) and its extension, federated distillation (FD). However, existing research on MIAs against FD is insufficient. In this paper, we propose a novel membership inference attack named GradDiff, a passive gradient-based MIA that employs differential comparison. Additionally, to make full use of the federated training process, we design the gradient drift attack (GradDrift), an active version of GradDiff in which the attacker modifies the target model through gradient tuning and can thereby obtain more information about membership privacy. We conduct extensive experiments on three real-world datasets to evaluate the effectiveness of the proposed attacks. The results show that the proposed attacks outperform existing baseline methods in terms of precision and recall. We also perform a thorough investigation of the factors that may influence the performance of MIAs against FD.
Keywords
Membership inference attack, Federated distillation, Membership privacy, Privacy leakage
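The abstract does not spell out GradDiff's algorithm, so the following is only a minimal, hypothetical sketch of the general idea it names: a passive gradient-based membership test using differential comparison, here realized by comparing a candidate sample's per-sample loss-gradient norm against a reference statistic computed from known non-members. The model, data shapes, threshold rule, and the assumption that members yield smaller gradient norms are all illustrative choices, not the authors' method.

```python
# Illustrative sketch only (not the authors' GradDiff): a gradient-based
# membership test with differential comparison. Assumed intuition: samples
# seen during training tend to produce smaller loss-gradient norms on a
# trained model than unseen samples.
import torch
import torch.nn.functional as F


def grad_norm(model, x, y):
    """L2 norm of the loss gradient w.r.t. model parameters for one sample."""
    model.zero_grad()
    loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    loss.backward()
    squared = sum((p.grad ** 2).sum() for p in model.parameters()
                  if p.grad is not None)
    return squared.sqrt().item()


def infer_membership(model, candidates, nonmember_refs):
    """Differential comparison: flag a candidate as a member when its
    gradient norm falls below the mean norm over known non-member refs."""
    ref = sum(grad_norm(model, x, y) for x, y in nonmember_refs)
    ref /= len(nonmember_refs)
    return [grad_norm(model, x, y) < ref for x, y in candidates]
```

A usage sketch, assuming a trained classifier clf plus (input, label) pairs cands and known non-member pairs refs: guesses = infer_membership(clf, cands, refs). The active variant described in the abstract (GradDrift) would additionally tune the target model's gradients during federated training to amplify this signal; that step is not reconstructed here.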