ByRDiE: A Byzantine-Resilient Distributed Learning Algorithm

2018 IEEE Data Science Workshop (DSW), 2018

Abstract
In this paper, a Byzantine-resilient distributed coordinate descent (ByRDiE) algorithm is introduced to accomplish machine learning tasks in a fully distributed fashion when there are Byzantine failures in the network. When data is distributed over a network, it is often desirable to implement a fully distributed learning algorithm that does not require sharing of raw data among the network entities. To this end, existing distributed algorithms typically rely on the cooperation of all nodes in the network. However, real-world applications often encounter situations where some nodes are either unreliable or malicious. Such situations, in which some nodes do not behave as intended, can be modeled as Byzantine failures. Byzantine failures are generally hard to detect and can lead to the breakdown of distributed learning algorithms. In this paper, it is shown that ByRDiE can provably tolerate Byzantine failures in the network under certain assumptions on the network topology and the machine learning task. ByRDiE accomplishes this by incorporating a local "screening" step into the update of a distributed coordinate descent algorithm. Finally, numerical results reported in the paper confirm the robustness of ByRDiE to Byzantine failures.
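
The abstract's key idea is a local "screening" step inside a coordinate descent update. A minimal sketch of this idea is given below, assuming a trimmed-style screening rule in which each node discards the most extreme neighbor reports for the active coordinate before averaging and taking a local gradient step; the parameter names (`b`, `neighbor_vals`, `local_grad`) are illustrative and not the paper's notation.

```python
# Sketch of a screened coordinate update at one node (illustrative only).
import numpy as np

def screened_coordinate_update(own_val, neighbor_vals, local_grad, b, step):
    """One coordinate update at a single node.

    own_val       : this node's current value of the active coordinate
    neighbor_vals : values of the same coordinate received from neighbors
    local_grad    : partial derivative of the node's local empirical risk
                    with respect to the active coordinate
    b             : assumed upper bound on the number of Byzantine neighbors
    step          : gradient step size
    """
    vals = np.sort(np.asarray(neighbor_vals, dtype=float))
    # Screening: drop the b smallest and b largest neighbor values so that
    # extreme (possibly Byzantine) reports cannot skew the combined value.
    trimmed = vals[b:len(vals) - b] if len(vals) > 2 * b else np.array([own_val])
    # Combine the surviving values with the node's own value, then descend
    # along the active coordinate using only local gradient information.
    consensus = np.mean(np.append(trimmed, own_val))
    return consensus - step * local_grad

# Example: six neighbors, one of which reports an outlying value.
print(screened_coordinate_update(0.5, [0.4, 0.6, 0.55, 0.45, 0.5, 100.0],
                                 local_grad=0.2, b=1, step=0.1))
```

In this sketch, screening bounds the influence of any single faulty report on the coordinate-wise average, which is the intuition behind the robustness claim in the abstract; the paper's actual update rule and its convergence guarantees are given under specific assumptions on the network topology and the learning task.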
Keywords
Byzantine failure, distributed optimization, empirical risk minimization, machine learning, multiagent networks