Mirror descent in non-convex stochastic programming.

arXiv: Optimization and Control (2017)

Abstract
In this paper, we examine a class of non-convex stochastic optimization problems which we call variationally coherent, and which properly includes all quasi-convex programs. In view of solving such problems, we focus on the widely used stochastic mirror descent (SMD) family of algorithms, and we establish that the method's last iterate converges with probability 1. We further introduce a localized version of variational coherence which ensures local convergence of SMD with high probability. These results contribute to the landscape of non-convex stochastic optimization by showing that quasi-convexity is not essential for convergence: rather, variational coherence, a much weaker requirement, suffices. Finally, building on the above, we reveal an interesting insight regarding the convergence speed of SMD: in variationally coherent problems with sharp minima (e.g., generic linear programs), the last iterate of SMD reaches an exact global optimum in a finite number of steps (a.s.), even in the presence of persistent noise. This result is to be contrasted with existing work on black-box stochastic linear programs, which only exhibit asymptotic convergence rates.
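To make the SMD iteration concrete, here is a minimal sketch of one standard instance: the entropic mirror map on the probability simplex (the exponentiated-gradient variant), applied to a noisy linear objective, which is the "sharp minimum" setting the abstract mentions. This is not the authors' code; the oracle, the 1/sqrt(n) step schedule, and the cost vector `c` are illustrative assumptions.

```python
import numpy as np

def smd(grad_oracle, x0, steps=10_000, eta0=1.0):
    """Stochastic mirror descent with the entropic mirror map on the simplex.

    A minimal sketch: grad_oracle(x) returns a noisy gradient estimate, and
    eta0 / sqrt(n + 1) is a standard (assumed) step-size schedule.
    """
    y = np.log(x0)                          # dual (score) variable
    x = x0
    for n in range(steps):
        g = grad_oracle(x)                  # stochastic gradient sample
        y = y - eta0 / np.sqrt(n + 1) * g   # gradient step in the dual space
        z = np.exp(y - y.max())             # mirror step: softmax map,
        x = z / z.sum()                     # shifted for numerical stability
    return x                                # the paper studies this last iterate

# Usage (hypothetical data): noisy gradients of f(x) = <c, x>, a linear
# program over the simplex, whose vertex solution is a sharp minimum.
rng = np.random.default_rng(0)
c = np.array([0.3, 1.0, 0.7])
oracle = lambda x: c + 0.1 * rng.standard_normal(c.size)
x_last = smd(oracle, np.ones(3) / 3)
print(x_last)   # mass concentrates on the vertex minimizing <c, x>
```

In this sharp-minimum example the iterate concentrates on the optimal vertex despite the persistent gradient noise, which is the behavior the abstract's finite-time convergence result formalizes; the code itself only illustrates the update rule, not the proof.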