Convergence Rates For Distributed Stochastic Optimization Over Random Networks

2018 IEEE Conference on Decision and Control (CDC), 2018

Abstract
We establish the O(1/k) convergence rate for distributed stochastic gradient methods that operate over strongly convex costs and random networks. The considered class of methods is standard: each node performs a weighted average of its own and its neighbors' solution estimates (consensus) and takes a negative step with respect to a noisy version of its local function's gradient (innovation). The underlying communication network is modeled through a sequence of temporally independent identically distributed (i.i.d.) Laplacian matrices such that the underlying graphs are connected on average; the local gradient noises are also i.i.d. in time, have finite second moment, and possibly unbounded support. We show that, after a careful setting of the consensus and innovations potentials (weights), the distributed stochastic gradient method achieves an (order-optimal) O(1/k) convergence rate in the mean square distance from the solution. To the best of our knowledge, this is the first order-optimal convergence rate result on distributed strongly convex stochastic optimization when the network is random and the gradient noises have unbounded support. Simulation examples confirm the theoretical findings.
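To make the update rule described in the abstract concrete, the following is a minimal sketch, not the authors' implementation: one plausible consensus + innovation iteration over i.i.d. random graph Laplacians, with hypothetical quadratic local costs f_i(x) = 0.5*(x - b_i)^2 and illustrative 1/k step-size schedules. All names, parameters, and the noise model are assumptions made for illustration, not taken from the paper.

```python
# A minimal sketch, assuming quadratic local costs f_i(x) = 0.5*(x - b_i)^2
# whose global minimizer is mean(b_i). Step sizes ~ 1/k are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_iters = 10, 5000
b = rng.normal(size=n_nodes)            # local cost parameters (assumed)
x = np.zeros(n_nodes)                   # each node's current estimate
x_star = b.mean()                       # minimizer of sum_i f_i

def random_laplacian(p_edge=0.3):
    """Draw one i.i.d. graph Laplacian: each edge is active w.p. p_edge."""
    A = (rng.random((n_nodes, n_nodes)) < p_edge).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                         # symmetric adjacency, no self-loops
    return np.diag(A.sum(axis=1)) - A

for k in range(1, n_iters + 1):
    L_k = random_laplacian()            # random network at iteration k
    # noisy local gradients (i.i.d. Gaussian noise, unbounded support)
    noisy_grad = (x - b) + rng.normal(scale=0.5, size=n_nodes)
    beta_k = 1.0 / (n_nodes * k)        # consensus weight, scaled for stability
    alpha_k = 1.0 / k                   # innovation (gradient) weight
    # consensus step (mix with neighbors) + innovation step (noisy gradient)
    x = x - beta_k * (L_k @ x) - alpha_k * noisy_grad

mse = np.mean((x - x_star) ** 2)
print(f"mean square distance from the solution after {n_iters} iterations: {mse:.2e}")
```

Under the assumed 1/k schedules, the per-node mean square distance from x_star decays roughly at the O(1/k) rate the paper establishes for its (differently tuned) weight sequences.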
Keywords
stochastic optimization