Newton-MR: Newton's Method Without Smoothness or Convexity.
arXiv: Optimization and Control (2018)
Abstract
Establishing global convergence of the classical Newton's method has long required (strong) convexity assumptions, which has limited the applicability of Newton's method in its classical form. Hence, many Newton-type variants have been proposed that aim to extend the classical Newton's method beyond (strongly) convex problems. Furthermore, as a common denominator, the analysis of almost all these methods relies heavily on Lipschitz continuity assumptions on the gradient and Hessian. In fact, it is widely believed that in the absence of a well-behaved and continuous Hessian, using curvature information can hurt more than it can help. Here, we show that two seemingly simple modifications of the classical Newton's method result in an algorithm, called Newton-MR, which can readily be applied to invex problems. Newton-MR appears almost indistinguishable from the classical Newton's method, yet it offers a diverse range of algorithmic and theoretical advantages. In particular, not only does Newton-MR's applicability extend far beyond convexity, but it is also more suitable than the classical Newton's method for (strongly) convex problems. Furthermore, by introducing a much weaker notion of joint regularity of the Hessian and gradient, we show that the global convergence of Newton-MR can be established even in the absence of continuity assumptions on the gradient and/or Hessian. We further obtain local convergence guarantees for Newton-MR and show that our local analysis indeed generalizes that of the classical Newton's method. Specifically, our analysis does not make use of the notion of an isolated minimum, which is required for the local convergence analysis of the classical Newton's method.
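The abstract does not spell out the two modifications, but the method's name points to a minimum-residual (least-squares) treatment of the Newton system. The following is a minimal, hypothetical NumPy sketch under that reading: the Newton direction is taken as the least-squares solution p = -H†g (computed here with a dense pseudo-inverse for clarity; a large-scale implementation would instead use an iterative minimum-residual solver such as MINRES), and the backtracking line search decreases the squared gradient norm rather than the objective value. All function names, tolerances, and the Armijo-type condition below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def newton_mr_sketch(grad, hess, x0, max_iter=100, tol=1e-8, rho=1e-4):
    """Hypothetical Newton-MR-style iteration (illustrative sketch).

    Assumed modifications relative to classical Newton:
      (i)  the step solves the Newton system in the least-squares
           (minimum-residual) sense, p = -pinv(H) @ g, so no invertibility
           or positive definiteness of H is needed;
      (ii) the line search monitors ||grad f||^2 instead of f itself.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = g @ g
        if np.sqrt(gnorm2) <= tol:
            break
        H = hess(x)
        # Minimum-residual direction: p = argmin_p ||H p + g||_2.
        p = -np.linalg.pinv(H) @ g
        # Backtracking line search enforcing an Armijo-type decrease in
        # the squared gradient norm (the decrease term p @ (H @ g) is
        # nonpositive for the least-squares direction above).
        alpha = 1.0
        while True:
            g_new = grad(x + alpha * p)
            if g_new @ g_new <= gnorm2 + 2 * rho * alpha * (p @ (H @ g)):
                break
            alpha *= 0.5
            if alpha < 1e-12:  # safeguard against stagnation
                break
        x = x + alpha * p
    return x
```

Driving the squared gradient norm to zero is a natural fit for the invex setting mentioned in the abstract, since for invex functions every stationary point is a global minimizer.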