Mixed Quantization Enabled Federated Learning to Tackle Gradient Inversion Attacks

CVPR Workshops (2023)

Abstract
Federated Learning (FL) enables collaborative model building among a large number of participants without explicit data sharing. However, this approach is vulnerable to gradient inversion attacks: because the exchange of gradients is inherent to the FL architecture, attackers can recover sensitive data from the shared model gradients with a high success rate. Most alarmingly, such attacks can be carried out covertly, reconstructing information about the raw data from the gradients without hampering training performance. Common existing defenses against data reconstruction in FL include adding noise via differential privacy, homomorphic encryption, and gradient pruning. These approaches suffer from major drawbacks: key generation for encryption becomes tedious as the number of clients grows, model performance can drop significantly, and selecting a suitable pruning ratio is difficult. As a countermeasure, we propose a mixed quantization enabled FL scheme and empirically show that it resolves the issues above. In addition, our approach improves robustness because different layers of the deep model are quantized with different precisions and quantization modes. We empirically validate our defense against both iteration-based and recursion-based gradient inversion attacks, evaluate the proposed FL framework on three benchmark datasets, and find that it outperforms the baseline defense mechanisms.
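To make the idea of per-layer mixed quantization concrete, the sketch below shows one way a client could quantize its gradients with layer-specific precisions and modes before communicating them. This is a minimal illustration only: the layer schedule, bit widths, and function names are assumptions for exposition, not the authors' actual implementation.

```python
# Hypothetical sketch: per-layer mixed quantization of client gradients in FL.
# Each layer gets its own precision (bit width) and quantization mode, so the
# information an attacker could invert from any single layer's gradient is reduced.
import numpy as np

def quantize_uniform(grad, num_bits):
    """Uniform (deterministic) quantization of a gradient tensor to num_bits."""
    levels = 2 ** num_bits - 1
    g_min, g_max = grad.min(), grad.max()
    scale = (g_max - g_min) / levels if g_max > g_min else 1.0
    codes = np.round((grad - g_min) / scale)      # integer codes in [0, levels]
    return codes * scale + g_min                  # dequantized gradient to transmit

def quantize_stochastic(grad, num_bits):
    """Stochastic-rounding variant: round up with probability equal to the fractional part."""
    levels = 2 ** num_bits - 1
    g_min, g_max = grad.min(), grad.max()
    scale = (g_max - g_min) / levels if g_max > g_min else 1.0
    x = (grad - g_min) / scale
    codes = np.floor(x) + (np.random.rand(*grad.shape) < (x - np.floor(x)))
    return codes * scale + g_min

# Illustrative (assumed) schedule: coarser, stochastic quantization on early layers,
# finer uniform quantization on later layers.
LAYER_SCHEDULE = {
    "conv1": ("stochastic", 4),
    "conv2": ("stochastic", 6),
    "fc":    ("uniform", 8),
}

def quantize_client_gradients(named_grads):
    """Apply the mixed-precision schedule to a dict of layer_name -> gradient array."""
    quantized = {}
    for name, grad in named_grads.items():
        mode, bits = LAYER_SCHEDULE.get(name, ("uniform", 8))
        quantize = quantize_stochastic if mode == "stochastic" else quantize_uniform
        quantized[name] = quantize(grad, bits)
    return quantized
```

In this sketch, each client would call `quantize_client_gradients` on its locally computed gradients before uploading them for aggregation; the server-side averaging is unchanged.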
Keywords
baseline defense mechanisms, collaborative model building, data reconstruction, deep model, defense method, differential privacy, explicit data sharing, FL framework, FL models, gradient pruning, homomorphic encryption, inherent architecture, iteration-based gradient inversion attacks, key generation process, mixed quantization enabled federated learning, mixed quantization enabled FL scheme, model gradients, quantization modes, recursion-based gradient inversion attacks, sensitive data, suitable pruning ratio