Privacy-Enhancing and Robust Backdoor Defense for Federated Learning on Heterogeneous Data

IEEE Transactions on Information Forensics and Security (2024)

Abstract
Federated learning (FL) allows multiple clients to train deep learning models collaboratively while keeping sensitive local datasets private. In practical deployments, however, FL remains highly vulnerable both to federated backdoor attacks (FBA), in which adversaries inject triggers into the global model, and to privacy leakage from the uploaded models. Existing FBA defense strategies assume specific and limited attacker models, and injecting a sufficient amount of noise can only mitigate rather than eliminate the attack. To address these deficiencies, we introduce a Robust Federated Backdoor Defense Scheme (RFBDS) and a privacy-preserving variant, PrivRFBDS, to ensure the elimination of adversarial backdoors. RFBDS counters FBA through amplified magnitude sparsification, adaptive OPTICS clustering, and adaptive clipping. We evaluate RFBDS on three benchmark datasets and compare it extensively with state-of-the-art studies. The results demonstrate promising defense performance: in terms of the average FBA success rate over MNIST, FMNIST, and CIFAR10, RFBDS improves on clustering-based defense methods by 31.75%~73.75%, and by up to 0.03%~56.90% under Non-IID settings. Besides, our privacy-preserving shuffling in PrivRFBDS is 7.83e-5~0.42x that of state-of-the-art works.
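The pipeline named above (amplified magnitude sparsification, adaptive OPTICS clustering, and adaptive clipping) can be illustrated with a minimal Python sketch. This is not the authors' implementation: the top-k ratio, amplification factor, OPTICS parameters, and quantile-based clipping bound are all assumed for illustration, using scikit-learn's OPTICS and NumPy on flattened client updates.

import numpy as np
from sklearn.cluster import OPTICS

def sparsify_top_k(update, k_ratio=0.1, amplify=2.0):
    # Keep only the largest-magnitude coordinates of a flattened update and
    # amplify the survivors (hypothetical "amplified magnitude sparsification").
    flat = update.ravel()
    k = max(1, int(k_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = amplify * flat[idx]
    return sparse

def filter_and_clip(client_updates, clip_quantile=0.5, min_samples=3):
    # Cluster sparsified updates with OPTICS, keep the majority cluster,
    # then clip each surviving update to an adaptive norm bound before averaging.
    sparsified = np.stack([sparsify_top_k(u) for u in client_updates])
    labels = OPTICS(min_samples=min_samples, metric="cosine").fit_predict(sparsified)
    valid = labels[labels != -1]          # -1 marks OPTICS noise points
    if valid.size == 0:
        keep = np.arange(len(client_updates))
    else:
        majority = np.bincount(valid).argmax()   # majority cluster assumed benign
        keep = np.where(labels == majority)[0]
    kept = [client_updates[i] for i in keep]
    # Adaptive clipping: bound each kept update's L2 norm by a quantile of the kept norms.
    norms = np.array([np.linalg.norm(u) for u in kept])
    bound = np.quantile(norms, clip_quantile)
    clipped = [u * min(1.0, bound / (np.linalg.norm(u) + 1e-12)) for u in kept]
    return np.mean(clipped, axis=0)

The sketch follows the common pattern of server-side filtering defenses: cluster the (sparsified) client updates, retain only the majority cluster, and bound the norm of what remains before aggregation; the clustering operates on sparsified copies while the original updates are what get clipped and averaged.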
Keywords
Federated learning, backdoor defense, distributed backdoor attack, privacy-preserving, heterogeneous data