Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations

COLT (2020)

Citations 51 | Views 170
Abstract
We design an algorithm which finds an ϵ-approximate stationary point (with ‖∇F(x)‖ ≤ ϵ) using O(ϵ^{-3}) stochastic gradient and Hessian-vector products, matching guarantees that were previously available only under a stronger assumption of access to multiple queries with the same random seed. We prove a lower bound which establishes that this rate is optimal and, surprisingly, that it cannot be improved using stochastic pth-order methods for any p ≥ 2, even when the first p derivatives of the objective are Lipschitz. Together, these results characterize the complexity of non-convex stochastic optimization with second-order methods and beyond. Expanding our scope to the oracle complexity of finding (ϵ, γ)-approximate second-order stationary points, we establish nearly matching upper and lower bounds for stochastic second-order methods. Our lower bounds here are novel even in the noiseless case.
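To make the upper-bound mechanism concrete, below is a minimal Python sketch of the general variance-reduction idea that Hessian-vector products enable: between periodic fresh gradient estimates, a running gradient estimate is corrected along each step via ∇F(x') ≈ ∇F(x) + ∇²F(x)(x' − x). The oracles stoch_grad and stoch_hvp, and all step-size and schedule parameters, are hypothetical placeholders; this is an illustration of the technique, not the paper's algorithm or its parameter choices.

```python
import numpy as np

def find_stationary_point(x0, stoch_grad, stoch_hvp, eta=0.01,
                          refresh_every=100, batch=16, n_steps=5000):
    """Gradient descent on a drift-corrected stochastic gradient estimate.

    Between periodic fresh large-batch gradient estimates, the running
    estimate g is updated with a stochastic Hessian-vector product along
    the step taken, using grad F(x') ~= grad F(x) + H(x) (x' - x).
    All parameters here are illustrative placeholders.
    """
    x = x0.copy()
    # Initial large-batch gradient estimate.
    g = np.mean([stoch_grad(x) for _ in range(batch)], axis=0)
    for t in range(1, n_steps + 1):
        x_new = x - eta * g  # plain gradient step on the current estimate
        if t % refresh_every == 0:
            # Periodic refresh: reset accumulated estimation drift.
            g = np.mean([stoch_grad(x_new) for _ in range(batch)], axis=0)
        else:
            # Cheap correction: one stochastic HVP along the step taken.
            g = g + stoch_hvp(x, x_new - x)
        x = x_new
    return x

if __name__ == "__main__":
    # Toy check on a noisy quadratic F(x) = 0.5 * ||x||^2, where
    # grad F(x) = x and the Hessian is the identity.
    rng = np.random.default_rng(0)
    noisy_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)
    noisy_hvp = lambda x, v: v + 0.1 * rng.standard_normal(x.shape)
    x = find_stationary_point(np.ones(5), noisy_grad, noisy_hvp)
    print(np.linalg.norm(x))  # should be small, i.e. near-stationary
```

The point of the HVP correction is that it reuses the existing estimate rather than drawing fresh gradients at every step, which is how this style of method reduces the per-iteration variance without the multiple-queries-per-seed assumption mentioned in the abstract.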
Keywords
optimization, information, second-order, non-convex