Stochastic ADMM for Byzantine-Robust Distributed Learning

2020 IEEE International Conference on Acoustics, Speech, and Signal Processing (2020)

Abstract
In this paper, we aim at solving a distributed machine learning problem under Byzantine attacks. In the distributed system, a number of workers (termed Byzantine workers) could send arbitrary messages to the master and bias the learning process, due to data corruptions, computation errors, or malicious attacks. Prior work has considered a total variation (TV) norm-penalized approximation formulation to handle Byzantine attacks, where the TV norm penalty forces the regular workers' local variables to be close and, meanwhile, tolerates the outliers sent by the Byzantine workers. The stochastic subgradient method, which does not consider the problem structure, is shown to be able to solve the TV norm-penalized approximation formulation. In this paper, we propose a stochastic alternating direction method of multipliers (ADMM) that utilizes the special structure of the TV norm penalty. The stochastic ADMM iterates are further simplified, such that the iteration-wise communication and computation costs are the same as those of the stochastic subgradient method. Numerical experiments on the COVERTYPE and MNIST datasets demonstrate the resilience of the proposed stochastic ADMM to various Byzantine attacks.
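To illustrate why the TV norm penalty tolerates Byzantine outliers, the sketch below compares two master updates on a toy problem: a subgradient step on the TV-penalized term, whose per-worker influence is bounded by the sign function, versus naive averaging of the received variables. This is a hypothetical toy simulation, not the paper's simplified stochastic ADMM iterates; the quadratic local losses, step size, penalty weight, and attack model are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_workers, n_byz = 5, 10, 3   # 3 of 10 workers are Byzantine (illustrative numbers)
lam, lr = 0.5, 0.05              # assumed TV-penalty weight and step size

# Hypothetical local losses f_i(x) = 0.5 * ||x - a||^2 with a shared optimum a
a = np.ones(d)
x_robust = np.zeros(d)           # master variable, TV-penalized subgradient update
x_naive = np.zeros(d)            # master variable, plain averaging

for t in range(500):
    msgs_r, msgs_n = [], []
    for i in range(n_workers):
        if i < n_byz:
            # Byzantine worker: sends an arbitrary (here, huge random) message
            bad = rng.normal(scale=100.0, size=d)
            msgs_r.append(bad)
            msgs_n.append(bad)
        else:
            # Regular worker: one stochastic gradient step on its local loss
            g_r = (x_robust - a) + rng.normal(scale=0.1, size=d)
            g_n = (x_naive - a) + rng.normal(scale=0.1, size=d)
            msgs_r.append(x_robust - lr * g_r)
            msgs_n.append(x_naive - lr * g_n)
    # The TV penalty lam * sum_i ||x0 - x_i||_1 has subgradient
    # lam * sum_i sign(x0 - x_i): each worker's pull on the master is at most
    # lam per coordinate, no matter how extreme a Byzantine message is.
    x_robust -= lr * lam * np.sum(np.sign(x_robust - np.array(msgs_r)), axis=0)
    # Naive aggregation: average the received variables (unbounded influence)
    x_naive = np.mean(msgs_n, axis=0)

print(np.linalg.norm(x_robust - a), np.linalg.norm(x_naive - a))
```

Under this setup, the sign-bounded update stays near the shared optimum while the averaged estimate is dragged arbitrarily far by the Byzantine messages, which is the intuition behind the TV-penalized formulation the abstract describes.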
Keywords
Distributed stochastic learning, alternating direction method of multipliers (ADMM), Byzantine attacks