An Adaptive Cubic Overestimation (ACO) algorithm for unconstrained optimization is proposed, simultaneously generalizing an unpublished method due to Griewank (Technical Report NA/12, 1981, DAMTP, Univ. of Cambridge), an algorithm by Nesterov & Polyak (Math. Programming 108(1), 2006, pp. 177-205) and a proposal by Weiser, Deuflhard & Erdmann (Optim. Methods Softw. 22(3), 2007, pp. 413-431). At each iteration of our approach, an approximate global minimizer of a local cubic overestimator of the objective function is determined; this ensures a significant improvement in the objective so long as the Hessian of the objective is locally Lipschitz continuous. The new method uses an adaptive estimate of the local Lipschitz constant and approximations to the global model minimizer that remain computationally viable even for large-scale problems. We show that the excellent global and local convergence properties and worst-case iteration complexity bounds obtained by Nesterov & Polyak are retained, and sometimes extended to a wider class of problems, by our ACO approach. Numerical experiments with small-scale test problems from the CUTEr set show superior performance of the ACO algorithm when compared to a trust-region implementation.
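For orientation, the cubic model underlying this class of methods can be sketched as follows; the formulas below reflect the standard cubic-overestimation framework the report builds on, with the notation g_k, B_k and \sigma_k assumed here for illustration rather than quoted from the abstract. If the Hessian H of the objective f is locally Lipschitz continuous with constant L, then

\[
f(x+s) \;\le\; f(x) + s^T g(x) + \tfrac{1}{2}\, s^T H(x)\, s + \tfrac{L}{6}\,\|s\|^3,
\]

so the right-hand side is a cubic overestimator of f. At iteration k the method approximately (globally) minimizes a model of the form

\[
m_k(s) \;=\; f(x_k) + s^T g_k + \tfrac{1}{2}\, s^T B_k\, s + \tfrac{\sigma_k}{3}\,\|s\|^3,
\]

where B_k approximates the Hessian at x_k and the regularization weight \sigma_k plays the role of an adaptively estimated local Lipschitz constant, adjusted according to how well the model predicts the actual decrease in f.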
Citation
Technical Report RAL-TR-2007-007, Rutherford Appleton Laboratory, Chilton, Oxfordshire, OX11 0QX, England
Adaptive cubic overestimation methods for unconstrained optimization