On Stochastic and Deterministic Quasi-Newton Methods for Nonstrongly Convex Optimization: Asymptotic Convergence and Rate Analysis

arXiv: Optimization and Control (2020)

Abstract
Motivated by applications arising from large-scale optimization and machine learning, we consider stochastic quasi-Newton (SQN) methods for solving unconstrained convex optimization problems. Much of the convergence analysis of SQN methods, in both full and limited-memory regimes, requires the objective function to be strongly convex. However, this assumption is fairly restrictive and does not hold in many applications. To the best of our knowledge, no rate statements currently exist for SQN methods in the absence of such an assumption. Furthermore, among the existing first-order methods for addressing stochastic optimization problems with merely convex objectives, techniques equipped with provable convergence rates employ averaging. However, averaging has a detrimental impact on inducing sparsity. Motivated by these gaps, we consider optimization problems with non-strongly convex objectives and Lipschitz, but possibly unbounded, gradients. The main contributions of the paper are as follows: (i) To address large-scale stochastic optimization problems, we develop an iteratively regularized stochastic limited-memory BFGS (IRS-LBFGS) algorithm, where the step size, regularization parameter, and Hessian inverse approximation are updated iteratively. We establish convergence of the iterates (with no averaging) to an optimal solution of the original problem both in an almost-sure sense and in a mean sense. The convergence rate is derived in terms of the objective function value and is shown to be O(1/k^(1/3 - epsilon)), where epsilon is an arbitrarily small positive scalar. (ii) In deterministic regimes, we show that the algorithm displays a rate of O(1/k^(1 - epsilon)). We present numerical experiments performed on a large-scale text classification problem and compare IRS-LBFGS with standard SQN methods as well as first-order methods such as SAGA and IAG.
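
The abstract describes IRS-LBFGS only at a high level, so the following is a minimal Python sketch of what an iteratively regularized stochastic L-BFGS loop could look like: a diminishing step size gamma_k and regularization parameter mu_k, a stochastic gradient of the regularized objective f(x) + (mu_k/2)||x||^2, and a limited-memory two-loop recursion for the Hessian inverse approximation. The decay schedules, the names irs_lbfgs_sketch and grad_sample, and the curvature-pair safeguard are illustrative assumptions, not the paper's exact algorithm.

import numpy as np


def _two_loop(g, s_hist, y_hist):
    # Standard L-BFGS two-loop recursion: returns H_k @ g, where H_k is the
    # limited-memory inverse-Hessian approximation built from (s, y) pairs.
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_hist), reversed(y_hist)):
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        q -= alpha * y
        alphas.append((rho, alpha))
    if s_hist:  # scaled identity as the initial approximation H_k^0
        s, y = s_hist[-1], y_hist[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), (rho, alpha) in zip(zip(s_hist, y_hist), reversed(alphas)):
        beta = rho * (y @ q)
        q += (alpha - beta) * s
    return q


def irs_lbfgs_sketch(grad_sample, x0, n_iters=1000, memory=10,
                     gamma0=1.0, mu0=1.0, a=2.0 / 3.0, b=1.0 / 3.0, seed=0):
    # grad_sample(x, rng) is assumed to return an unbiased stochastic gradient
    # of the merely convex objective at x; the decay exponents a and b are
    # placeholders, since the paper derives the admissible schedules.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    s_hist, y_hist = [], []
    x_prev, g_prev = None, None
    for k in range(1, n_iters + 1):
        gamma_k = gamma0 / k**a   # diminishing step size
        mu_k = mu0 / k**b         # diminishing regularization parameter
        # gradient of the iteratively regularized objective f(x) + (mu_k/2)||x||^2
        g = grad_sample(x, rng) + mu_k * x
        if x_prev is not None:
            s_k, y_k = x - x_prev, g - g_prev
            if y_k @ s_k > 1e-10:  # keep only curvature pairs with y^T s > 0
                s_hist.append(s_k)
                y_hist.append(y_k)
                if len(s_hist) > memory:
                    s_hist.pop(0)
                    y_hist.pop(0)
        d = _two_loop(g, s_hist, y_hist)  # quasi-Newton direction H_k @ g
        x_prev, g_prev = x, g
        x = x - gamma_k * d
    return x  # last iterate, with no averaging

For instance, with a logistic-regression loss one would pass a grad_sample that draws a minibatch and returns its gradient; note that the guarantees quoted in the abstract concern the last iterate rather than an averaged sequence, which is what makes the method attractive for sparsity-inducing problems.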
Keywords
stochastic optimization, quasi-Newton, regularization, large-scale optimization