Fault-Tolerant Multi-Agent Optimization: Optimal Iterative Distributed Algorithms

PODC (2016)

Abstract
This paper addresses the problem of distributed multi-agent optimization in which each agent i has a local cost function h_i(x), and the goal is to optimize a global cost function consisting of the average of the local cost functions. Such optimization problems are of interest in many contexts, including distributed machine learning and distributed robotics.

We consider the distributed optimization problem in the presence of faulty agents. We focus primarily on Byzantine failures, but also briefly discuss some results for crash failures. For the Byzantine fault-tolerant optimization problem, the ideal goal is to optimize the average of the local cost functions of the non-faulty agents. However, this goal cannot be achieved in general. Therefore, we consider a relaxed version of the fault-tolerant optimization problem.

The goal for the relaxed problem is to generate an output that is an optimum of a global cost function formed as a convex combination of the local cost functions of the non-faulty agents. More precisely, if N denotes the set of non-faulty agents in a given execution, then there must exist weights α_i for i ∈ N, with α_i ≥ 0 and Σ_{i ∈ N} α_i = 1, such that the output is an optimum of the cost function Σ_{i ∈ N} α_i h_i(x). Ideally, we would like α_i = 1/|N| for all i ∈ N; however, the maximum number of nonzero weights α_i that can be guaranteed is |N| − f, where f is the maximum number of Byzantine faulty agents.

The contribution of this paper is an iterative distributed optimization algorithm that achieves optimal fault tolerance. Specifically, it ensures that at least |N| − f agents have weights that are bounded away from 0 (in particular, lower bounded by 1/(2(|N| − f))).
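To make the relaxed goal concrete, here is a small illustration (not taken from the paper): with quadratic local costs h_i(x) = (x − c_i)², the optimum of the weighted global cost Σ α_i h_i(x) is simply the weighted average of the local minimizers c_i. The names and values below are illustrative assumptions.

```python
def weighted_optimum(minimizers, weights):
    """Minimizer of sum_i weights[i] * (x - minimizers[i])**2.

    Setting the derivative 2 * sum_i w_i * (x - c_i) to zero gives
    x = sum_i w_i * c_i when the weights sum to 1.
    """
    assert all(w >= 0 for w in weights), "weights must be nonnegative"
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * c for w, c in zip(weights, minimizers))

# Local minimizers of four non-faulty agents, and the ideal weight
# vector alpha_i = 1/|N| (here |N| = 4).
c = [1.0, 2.0, 4.0, 5.0]
alpha = [0.25, 0.25, 0.25, 0.25]
print(weighted_optimum(c, alpha))  # 3.0, the plain average
```

In the fault-tolerant setting the algorithm cannot guarantee the ideal uniform weights; it can only guarantee that at least |N| − f of the α_i are bounded away from zero.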
The proposed distributed algorithm has a simple iterative structure, with each agent maintaining only a small amount of local state. We show that the iterative algorithm ensures two properties as time goes to infinity: consensus (i.e., output of non-faulty agents becomes identical in the time limit), and optimality (in the sense that the output is the optimum of a suitably defined global cost function). After a finite number of iterations, the algorithm satisfies these properties approximately.
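The abstract does not specify the update rule, so the following is only a hedged sketch of the kind of simple iterative structure such algorithms use: each non-faulty agent applies a trimmed mean to the values it receives (discarding the f smallest and f largest, a standard Byzantine-robust filter), then takes a gradient step on its own local cost. All agent counts, cost functions, and the fixed-value attack are illustrative assumptions, not the paper's construction.

```python
def trimmed_mean(values, f):
    """Mean of values after dropping the f smallest and f largest."""
    s = sorted(values)
    kept = s[f:len(s) - f]
    return sum(kept) / len(kept)

def run(minimizers, byz_value, f, steps=200, lr=0.1):
    # Non-faulty agents have quadratic costs h_i(x) = (x - c_i)^2;
    # one Byzantine agent always reports byz_value (a simple fixed
    # attack, for illustration only).
    x = list(minimizers)  # each non-faulty agent's current estimate
    for _ in range(steps):
        reports = x + [byz_value]     # values received in this round
        m = trimmed_mean(reports, f)  # robust aggregation step
        # gradient of (x - c_i)^2 at m is 2 * (m - c_i)
        x = [m - lr * 2 * (m - c) for c in minimizers]
    return x

estimates = run([1.0, 2.0, 4.0, 5.0], byz_value=100.0, f=1)
print(estimates)
```

With a constant step size the estimates settle inside the range of the non-faulty minimizers and stay close together without reaching exact consensus, mirroring the abstract's point that the properties hold approximately after finitely many iterations; the extreme Byzantine report is filtered out every round.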
Keywords
Distributed optimization, Byzantine faults, complete networks, fault-tolerant computing