Global Convergence of General Derivative-Free Trust-Region Algorithms to First and Second Order Critical Points

In this paper we prove global convergence to first- and second-order stationary points for a class of derivative-free trust-region methods for unconstrained optimization. These methods are based on the sequential minimization of linear or quadratic models built from evaluating the objective function at sample sets. The derivative-free models are required to satisfy Taylor-type error bounds but, apart from that, the analysis is independent of the sampling technique. A number of new issues are addressed, including global convergence when acceptance of iterates is based on simple decrease of the objective function, maintenance of the trust-region radius at the criticality step, and global convergence to second-order critical points.
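To make the scheme described above concrete, the following is a minimal sketch of one derivative-free trust-region iteration: a linear model is interpolated from a sample set of n+1 function evaluations, a step of trust-region length is taken along the model's steepest-descent direction, iterates are accepted on simple decrease, and the radius is reduced at an (approximate) criticality step. All names (`dftr_minimize`, `build_linear_model`) and the particular sampling and radius-update rules are illustrative assumptions, not the algorithm analyzed in the paper; in particular the paper's framework also allows quadratic models and radius increases.

```python
import numpy as np

def build_linear_model(f, x, delta):
    """Interpolate a linear model m(x + s) = c + g.s from the sample set
    {x, x + delta*e_1, ..., x + delta*e_n} (a simple choice; any poised
    sample set satisfying Taylor-type bounds would do)."""
    n = len(x)
    c = f(x)
    g = np.array([(f(x + delta * np.eye(n)[i]) - c) / delta
                  for i in range(n)])
    return c, g

def dftr_minimize(f, x0, delta0=1.0, tol=1e-5, max_iter=500):
    """Illustrative derivative-free trust-region loop with simple-decrease
    acceptance. Shrink-only radius updates are a simplification."""
    x = np.asarray(x0, dtype=float)
    delta = delta0
    fx = f(x)
    for _ in range(max_iter):
        if delta < tol:          # radius small enough: stop
            break
        _, g = build_linear_model(f, x, delta)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:          # criticality step: shrink the radius and
            delta *= 0.5         # rebuild the model on a finer sample set
            continue
        s = -delta * g / gnorm   # minimize the linear model on the ball
        f_new = f(x + s)
        if f_new < fx:           # accept on simple decrease of f
            x, fx = x + s, f_new
        else:                    # reject and shrink the trust region
            delta *= 0.5
    return x, fx
```

On a smooth test function this sketch drives the iterates toward a stationary point, since the forward-difference model error is O(delta) and the radius shrinks only after failed steps.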

Citation

Preprint 06-49, Department of Mathematics, University of Coimbra, Portugal, October 2006
