Convergence of the Restricted Nelder-Mead Algorithm in Two Dimensions.

SIAM Journal on Optimization (2012)

Abstract
The Nelder-Mead algorithm, a longstanding direct search method for unconstrained optimization published in 1965, is designed to minimize a scalar-valued function f of n real variables using only function values, without any derivative information. Each Nelder-Mead iteration is associated with a nondegenerate simplex defined by n + 1 vertices and their function values; a typical iteration produces a new simplex by replacing the worst vertex by a new point. Despite the method's widespread use, theoretical results have been limited: for strictly convex objective functions of one variable with bounded level sets, the algorithm always converges to the minimizer; for such functions of two variables, the diameter of the simplex converges to zero but examples constructed by McKinnon show that the algorithm may converge to a nonminimizing point. This paper considers the restricted Nelder-Mead algorithm, a variant that does not allow expansion steps. In two dimensions we show that for any nondegenerate starting simplex and any twice-continuously differentiable function with positive definite Hessian and bounded level sets, the algorithm always converges to the minimizer. The proof is based on treating the method as a discrete dynamical system and relies on several techniques that are nonstandard in convergence proofs for unconstrained optimization.
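To make the restricted variant concrete, the sketch below gives a minimal two-dimensional implementation: the standard Nelder-Mead iteration with the expansion step removed, using the usual coefficients (reflection 1, contraction 1/2, shrink 1/2). It is an illustrative reading of the abstract, not code from the paper; the function name restricted_nelder_mead, the diameter-based stopping test, and the example quadratic are assumptions made here.

```python
import numpy as np

def restricted_nelder_mead(f, simplex, tol=1e-8, max_iter=10_000):
    """Restricted Nelder-Mead in 2D: the standard method without expansion steps.

    f       : callable mapping a length-2 numpy array to a float
    simplex : (3, 2) array whose rows are the vertices of a nondegenerate triangle
    Returns the best vertex found once the simplex diameter falls below `tol`.
    """
    rho, gamma, sigma = 1.0, 0.5, 0.5  # reflection, contraction, shrink coefficients
    x = np.asarray(simplex, dtype=float).copy()
    fx = np.array([f(v) for v in x])

    for _ in range(max_iter):
        # Order vertices from best (lowest f) to worst.
        order = np.argsort(fx)
        x, fx = x[order], fx[order]

        # Stop when the simplex diameter is numerically zero.
        diameter = max(np.linalg.norm(x[i] - x[j])
                       for i in range(3) for j in range(i + 1, 3))
        if diameter < tol:
            break

        centroid = x[:2].mean(axis=0)             # centroid of the two best vertices
        xr = centroid + rho * (centroid - x[2])   # reflect the worst vertex
        fr = f(xr)

        if fr < fx[1]:
            # Accept the reflection point; no expansion step is attempted.
            x[2], fx[2] = xr, fr
        elif fr < fx[2]:
            # Outside contraction.
            xc = centroid + gamma * (xr - centroid)
            fc = f(xc)
            if fc <= fr:
                x[2], fx[2] = xc, fc
            else:
                x, fx = _shrink(f, x, fx, sigma)
        else:
            # Inside contraction.
            xcc = centroid - gamma * (centroid - x[2])
            fcc = f(xcc)
            if fcc < fx[2]:
                x[2], fx[2] = xcc, fcc
            else:
                x, fx = _shrink(f, x, fx, sigma)

    best = int(np.argmin(fx))
    return x[best], fx[best]

def _shrink(f, x, fx, sigma):
    """Shrink the two non-best vertices toward the best vertex."""
    for i in (1, 2):
        x[i] = x[0] + sigma * (x[i] - x[0])
        fx[i] = f(x[i])
    return x, fx

if __name__ == "__main__":
    # A strictly convex quadratic with bounded level sets, as in the paper's setting.
    f = lambda v: v[0] ** 2 + 2 * v[1] ** 2 + v[0] * v[1]
    start = np.array([[1.0, 1.0], [1.5, 1.0], [1.0, 1.6]])
    xmin, fmin = restricted_nelder_mead(f, start)
    print(xmin, fmin)  # expected to approach the minimizer at the origin
```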
Keywords
direct search methods, nonderivative optimization, derivative-free optimization, Nelder-Mead method