Improved Sample Complexity for Stochastic Compositional Variance Reduced Gradient

2020 American Control Conference (ACC), 2020

Abstract
Convex composition optimization is an emerging topic that covers a wide range of applications arising from stochastic optimal control, reinforcement learning, and multistage stochastic programming. Existing algorithms suffer from unsatisfactory sample complexity and practical issues because they ignore the convexity structure in the algorithmic design. In this paper, we develop a new stochastic compositional variance-reduced gradient algorithm with a sample complexity of O((m + n)log(1/ε) + 1/ε³), where m + n is the total number of samples. Our algorithm is near-optimal, as the dependence on m + n is optimal up to a logarithmic factor. Experimental results on real-world datasets demonstrate the effectiveness and efficiency of the new algorithm.
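To make the problem setting concrete, the sketch below illustrates a generic SVRG-style variance-reduced estimator for the finite-sum composition problem min_x f(g(x)) with m inner maps g_j and n outer losses f_i, which is the structure the sample count m + n refers to. It is a minimal illustration under assumed toy data (random linear inner maps and quadratic outer losses); the function names, step size, and epoch lengths are hypothetical and it does not reproduce the exact update rule or parameter choices analyzed in the paper.

```python
# Minimal sketch of an SVRG-style compositional gradient method for
# min_x f(g(x)), g(x) = (1/m) sum_j g_j(x), f(y) = (1/n) sum_i f_i(y).
# Illustrative only; NOT the exact algorithm of the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: g_j(x) = A_j x + b_j (R^d -> R^p), f_i(y) = 0.5||y - c_i||^2.
d, p, m, n = 5, 3, 20, 30
A = rng.normal(size=(m, p, d))
b = rng.normal(size=(m, p))
c = rng.normal(size=(n, p))

g_j  = lambda x, j: A[j] @ x + b[j]     # one inner component
Jg_j = lambda x, j: A[j]                # its Jacobian (p x d)
df_i = lambda y, i: y - c[i]            # gradient of one outer component

g_full  = lambda x: np.mean([g_j(x, j) for j in range(m)], axis=0)
Jg_full = lambda x: np.mean(A, axis=0)
df_full = lambda y: np.mean([df_i(y, i) for i in range(n)], axis=0)

def compositional_svrg_sketch(x0, epochs=30, inner=50, lr=0.05):
    """SVRG-style compositional gradient descent (hypothetical parameters)."""
    x_tilde = x0.copy()
    for _ in range(epochs):
        # Full snapshot quantities, recomputed once per epoch.
        g_snap  = g_full(x_tilde)        # g(x~)
        J_snap  = Jg_full(x_tilde)       # Jacobian of g at x~
        fg_snap = df_full(g_snap)        # grad f(g(x~))
        x = x_tilde.copy()
        for _ in range(inner):
            j = rng.integers(m)
            i = rng.integers(n)
            # Variance-reduced estimates of g(x), its Jacobian, and grad f.
            g_hat  = g_j(x, j) - g_j(x_tilde, j) + g_snap
            J_hat  = Jg_j(x, j) - Jg_j(x_tilde, j) + J_snap
            df_hat = df_i(g_hat, i) - df_i(g_snap, i) + fg_snap
            # Chain rule: grad F(x) is approximated by J_hat^T df_hat.
            x -= lr * J_hat.T @ df_hat
        x_tilde = x
    return x_tilde

x_star = compositional_svrg_sketch(np.zeros(d))
obj = 0.5 * np.mean([np.sum((g_full(x_star) - c[i]) ** 2) for i in range(n)])
print("final objective:", obj)
```

Each epoch touches all m + n components once to build the snapshot, while the cheap inner iterations reuse it through the standard SVRG correction terms; this is the general mechanism behind the (m + n)-dependent term in compositional variance-reduced complexity bounds.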
Keywords
Complexity theory, Optimization, Stochastic processes, Investment, Mathematical model, Robustness, Fans