Decentralized Local Stochastic Extra-Gradient for Variational Inequalities

NeurIPS 2022

Abstract
We consider distributed stochastic variational inequalities (VIs) on unbounded domains with heterogeneous (non-IID) problem data distributed across many devices. We make a very general assumption on the computational network that, in particular, covers both fully decentralized computation over time-varying networks and the centralized topologies commonly used in Federated Learning. Moreover, multiple local updates can be performed on the workers to reduce the communication frequency between them. We extend the stochastic extragradient method to this very general setting and theoretically analyze its convergence rate in the strongly monotone, monotone, and non-monotone (when a Minty solution exists) settings. The provided rates depend explicitly on the network characteristics and how they vary with time, on data heterogeneity, variance, the number of devices, and other standard parameters. As a special case, our method and analysis apply to distributed stochastic saddle-point problems (SPPs), e.g., to the training of Deep Generative Adversarial Networks (GANs), for which decentralized training has been reported to be extremely challenging. In experiments on the decentralized training of GANs, we demonstrate the effectiveness of our proposed approach.
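To make the scheme described in the abstract concrete, below is a minimal Python sketch of a decentralized local stochastic extragradient loop: each worker runs a few extragradient steps on its own stochastic operator, then all workers gossip-average their iterates through a doubly stochastic mixing matrix. The function names, the toy heterogeneous bilinear operators, the fixed ring topology, and the step size are illustrative assumptions, not the paper's exact algorithm, network model, or step-size schedule.

```python
import numpy as np

def decentralized_local_extragradient(Fs, W, x0, step, rounds, local_steps, rng):
    # One iterate per worker, all starting from the same point.
    X = np.tile(x0.astype(float), (len(Fs), 1))
    for _ in range(rounds):
        for _ in range(local_steps):
            for m, F in enumerate(Fs):
                x_half = X[m] - step * F(X[m], rng)  # extrapolation (look-ahead)
                X[m] = X[m] - step * F(x_half, rng)  # update with look-ahead operator value
        X = W @ X  # gossip step: mix iterates with neighbors (W doubly stochastic)
    return X.mean(axis=0)

# Toy heterogeneous data: worker m holds the saddle-point problem
# min_u max_v (u - a_m)(v - b_m), whose VI operator is F_m(u, v) = (v - b_m, a_m - u).
def make_operator(a, b, noise=0.05):
    def F(z, rng):
        u, v = z
        return np.array([v - b, a - u]) + noise * rng.standard_normal(2)
    return F

a = np.array([1.0, -1.0, 2.0, 0.0])
b = np.array([0.5, 1.5, -0.5, 0.5])
Fs = [make_operator(ai, bi) for ai, bi in zip(a, b)]

# Ring topology over 4 workers: each worker averages with its two neighbors.
W = 0.5 * np.eye(4) + 0.25 * (np.roll(np.eye(4), 1, axis=0) + np.roll(np.eye(4), -1, axis=0))

rng = np.random.default_rng(0)
sol = decentralized_local_extragradient(Fs, W, np.zeros(2), step=0.1, rounds=500, local_steps=4, rng=rng)
print(sol)  # approaches the global solution (a.mean(), b.mean()) = (0.5, 0.5)
```

The extrapolation step is what distinguishes extragradient from plain gradient descent-ascent, which diverges on bilinear problems like the toy one above; the gossip step drives the workers toward consensus despite their heterogeneous local operators.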
Keywords
convex optimization,variational inequalities,saddle point problems,gossip