A Smoothing Stochastic Quasi-Newton Method for Non-Lipschitzian Stochastic Optimization Problems
WSC '17: Winter Simulation Conference, Las Vegas, Nevada, December 2017
Abstract
Motivated by big data applications, we consider unconstrained stochastic optimization problems. Stochastic quasi-Newton methods have proved successful in addressing such problems. However, in both convex and non-convex regimes, most existing convergence theory requires the gradient mapping of the objective function to be Lipschitz continuous, a requirement that might not hold. To address this gap, we consider problems with not necessarily Lipschitzian gradients. Employing a local smoothing technique, we develop a smoothing stochastic quasi-Newton (S-SQN) method. Our main contributions are three-fold: (i) under suitable assumptions, we show that the sequence generated by the S-SQN scheme converges to the unique optimal solution of the smoothed problem almost surely; (ii) we derive an error bound in terms of the smoothed objective function values; and (iii) to quantify the solution quality, we derive a bound that relates the iterate generated by the S-SQN method to the optimal solution of the original problem.
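To make the abstract's idea concrete, the following is a minimal sketch of one way a smoothing stochastic quasi-Newton iteration could look. It is not the paper's algorithm: the smoothed stochastic gradient is approximated here by sampling the gradient at a point randomly perturbed within a ball of radius `mu` (a stand-in for the local smoothing technique), the inverse-Hessian approximation is maintained by a standard BFGS update with a curvature safeguard, and the step size `gamma`, the user-supplied `sample_grad` oracle, and all parameter names are illustrative assumptions.

```python
import numpy as np

def s_sqn_sketch(sample_grad, x0, mu=0.1, gamma=0.01, iters=1000, seed=0):
    """Illustrative smoothing stochastic quasi-Newton loop (not the paper's S-SQN).

    sample_grad(z, rng) should return a stochastic gradient estimate at z.
    mu is the smoothing radius; gamma is a fixed step size (assumptions).
    """
    rng = np.random.default_rng(seed)
    n = x0.size
    x = x0.astype(float).copy()
    H = np.eye(n)                      # inverse-Hessian approximation
    x_prev, g_prev = None, None
    for _ in range(iters):
        # Smoothed stochastic gradient: evaluate at a randomly perturbed point,
        # mimicking local smoothing of a possibly non-Lipschitzian gradient map.
        z = x + mu * rng.uniform(-1.0, 1.0, size=n)
        g = sample_grad(z, rng)
        if g_prev is not None:
            s, y = x - x_prev, g - g_prev
            sy = s @ y
            if sy > 1e-10:             # skip update when curvature condition fails
                rho = 1.0 / sy
                V = np.eye(n) - rho * np.outer(s, y)
                H = V @ H @ V.T + rho * np.outer(s, s)   # BFGS inverse update
        x_prev, g_prev = x.copy(), g
        x = x - gamma * (H @ g)        # quasi-Newton step on the smoothed problem
    return x

if __name__ == "__main__":
    # Toy usage: noisy gradient of a strongly convex quadratic (hypothetical test).
    A = np.diag([1.0, 10.0])
    grad = lambda z, rng: A @ z + 0.01 * rng.standard_normal(2)
    print(s_sqn_sketch(grad, np.array([5.0, -3.0])))
```

The curvature check before the BFGS update is a common safeguard when gradients are noisy or non-smooth; how the actual S-SQN method constructs the smoothed gradient and Hessian approximation is specified in the paper itself.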
Keywords
nonconvex regimes, convergence theory, Lipschitzian gradients, iterate generation, S-SQN method, smoothed objective function values, smoothed problem, S-SQN scheme convergence, local smoothing technique, gradient mapping, stochastic quasi-Newton methods, unconstrained stochastic optimization problems, big data applications, non-Lipschitzian stochastic optimization problems, smoothing stochastic quasi-Newton method