Convergence of the restricted Nelder-Mead algorithm in two dimensions

The Nelder-Mead algorithm, a longstanding direct search method for unconstrained optimization published in 1965, is designed to minimize a scalar-valued function $f$ of $n$ real variables using only function values, without any derivative information. Each Nelder-Mead iteration is associated with a nondegenerate simplex defined by $n + 1$ vertices and their function values; a typical iteration produces a new simplex by replacing the worst vertex by a new point. Despite the method's widespread use, theoretical results have been limited: for strictly convex objective functions of one variable with bounded level sets, the algorithm always converges to the minimizer; for such functions of two variables, the diameter of the simplex converges to zero, but examples constructed by McKinnon show that the algorithm may converge to a nonminimizing point. This paper considers the restricted Nelder-Mead algorithm, a variant that does not allow expansion steps. In two dimensions we show that, for any nondegenerate starting simplex and any twice-continuously differentiable function with positive definite Hessian and bounded level sets, the algorithm always converges to the minimizer. The proof is based on treating the method as a discrete dynamical system, and relies on several techniques that are non-standard in convergence proofs for unconstrained optimization.
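
For orientation, a minimal sketch of one restricted Nelder-Mead iteration in two dimensions is given below, assuming the standard reflection, contraction, and shrink coefficients (rho = 1, gamma = 1/2, sigma = 1/2). The function names, the test objective, and the starting simplex are illustrative choices for this example, not taken from the paper; the only structural difference from the original method is that the expansion step is omitted.

```python
# Sketch of one restricted Nelder-Mead iteration in two dimensions
# (no expansion step). Coefficients: rho = 1, gamma = 1/2, sigma = 1/2.
import numpy as np

def restricted_nelder_mead_step(simplex, f, rho=1.0, gamma=0.5, sigma=0.5):
    """One iteration on a 2-D simplex given as an array of shape (3, 2)."""
    # Order vertices from best (lowest f) to worst.
    simplex = simplex[np.argsort([f(v) for v in simplex])]
    best, second, worst = simplex
    centroid = (best + second) / 2.0  # centroid of the two best vertices

    # Reflect the worst vertex through the centroid.
    reflected = centroid + rho * (centroid - worst)
    if f(reflected) < f(second):
        # Restricted variant: accept the reflection even when the original
        # method would attempt an expansion.
        simplex[2] = reflected
        return simplex

    if f(reflected) < f(worst):
        # Outside contraction toward the reflected point.
        contracted = centroid + gamma * (reflected - centroid)
        if f(contracted) <= f(reflected):
            simplex[2] = contracted
            return simplex
    else:
        # Inside contraction toward the worst vertex.
        contracted = centroid - gamma * (centroid - worst)
        if f(contracted) < f(worst):
            simplex[2] = contracted
            return simplex

    # Shrink the simplex toward the best vertex.
    simplex[1:] = best + sigma * (simplex[1:] - best)
    return simplex

if __name__ == "__main__":
    # Strictly convex quadratic with bounded level sets (illustrative).
    f = lambda x: x[0] ** 2 + 2 * x[1] ** 2
    simplex = np.array([[1.0, 1.0], [1.5, 1.0], [1.0, 1.5]])
    for _ in range(60):
        simplex = restricted_nelder_mead_step(simplex, f)
    print(simplex.mean(axis=0))  # approaches the minimizer (0, 0)
```

On an objective of this kind, the simplex contracts around the minimizer, consistent with the convergence result stated above for twice-continuously differentiable functions with positive definite Hessian.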

Citation

Preprint, submitted to arXiv.org, April 3, 2011; http://arxiv.org/abs/1104.0350
