Mitigating Model Poisoning Attacks on Distributed Learning with Heterogeneous Data

International Conference on Machine Learning and Applications (2023)

Abstract
Gradient-based distributed learning techniques are essential for training machine learning models on distributed samples without collecting raw data. However, such learning systems are vulnerable to both internal failures and external attacks. In this work, we study the Byzantine robustness of distributed learning over heterogeneous data for classification tasks, where each worker holds data samples from only some specific classes (a.k.a. label skew) and a fraction of workers are corrupted by a Byzantine adversary that conducts attacks by sending malicious gradients. Existing defenses usually fail in such heterogeneous settings. To remedy this, we propose a gradient decomposition scheme called DeSGD to achieve more robust distributed model training. The key idea for mitigating the impact of data heterogeneity on Byzantine robustness is to decompose the full global gradient into individual gradients for each data class and perform resilient aggregation in a class-wise manner. The proposed framework can easily integrate existing advanced defense methods and local momentum mechanisms. Evaluation results on the Fashion-MNIST dataset under various strong attacks demonstrate the improved robustness of learning over distributed data in the presence of both label skew and attacks.
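The class-wise resilient aggregation idea described above can be illustrated with a minimal sketch. The function name, the per-class gradient layout, and the use of the coordinate-wise median as the resilient aggregator are illustrative assumptions, not the paper's exact DeSGD procedure; any robust rule (trimmed mean, Krum, etc.) could be substituted.

```python
import numpy as np

def classwise_robust_aggregate(worker_grads, agg=None):
    """Aggregate per-class gradients resiliently, class by class.

    worker_grads: array of shape (n_workers, n_classes, dim) holding each
    worker's gradient decomposed by data class. Instead of applying a
    robust rule once to the full gradients, it is applied separately
    within each class, then the class-wise results are summed back into
    a full global gradient.
    """
    if agg is None:
        # Coordinate-wise median as an illustrative Byzantine-resilient
        # aggregator (an assumption, not necessarily the paper's choice).
        agg = lambda g: np.median(g, axis=0)
    n_workers, n_classes, dim = worker_grads.shape
    per_class = np.stack([agg(worker_grads[:, c, :]) for c in range(n_classes)])
    return per_class.sum(axis=0)

# Toy usage: 8 honest workers plus 2 Byzantine workers sending large
# malicious values; the class-wise median suppresses their influence.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(8, 10, 5))
byzantine = np.full((2, 10, 5), 100.0)
grads = np.concatenate([honest, byzantine])
print(classwise_robust_aggregate(grads))  # close to 10 * 1.0 per coordinate
```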
Keywords
Distributed Optimization, Byzantine Attack, Non-IID Data