A decoupled first/second-order steps technique for nonconvex nonlinear unconstrained optimization with improved complexity bounds

In order to be provably convergent towards a second-order stationary point, optimization methods applied to nonconvex problems must necessarily exploit both first- and second-order information. However, as revealed by recent complexity analyses of some of these methods, the overall effort required to reach second-order stationary points is significantly larger than that required to approach first-order ones. In addition, other algorithmic schemes, initially designed with first-order convergence in mind, do not appear to maintain the same first-order performance when modified to take second-order information into account. In this paper, we propose a technique that computes first- and second-order steps separately and that globally converges to second-order stationary points. Our approach is shown to improve the corresponding complexity bound with respect to the first-order optimality tolerance. Although the applicability of our ideas is wider, we focus the presentation on trust-region methods with and without derivatives.
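The decoupling idea can be illustrated with a simple trust-region sketch. This is not the authors' exact algorithm but a minimal illustration under assumed details: a first-order (Cauchy-type) step is taken while the gradient is still large, and a dedicated second-order step along an approximate direction of most negative curvature is computed only once the gradient tolerance has been met. All names (decoupled_tr, cauchy_step, eigen_step) and parameter values below are illustrative.

```python
import numpy as np

def cauchy_step(g, H, delta):
    """First-order step: minimize the quadratic model along -g within the trust region."""
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return np.zeros_like(g)
    gHg = g @ H @ g
    if gHg <= 0.0:
        t = delta / gnorm                     # model unbounded below along -g: go to the boundary
    else:
        t = min(gnorm**2 / gHg, delta / gnorm)
    return -t * g

def eigen_step(g, H, delta, eps_H=1e-6):
    """Second-order step: move to the boundary along an approximate most-negative-curvature direction."""
    lam, V = np.linalg.eigh(H)
    if lam[0] >= -eps_H:
        return np.zeros_like(g)               # no significant negative curvature to exploit
    v = V[:, 0]
    v = -v if g @ v > 0.0 else v              # orient the direction so it is a descent direction
    return delta * v

def decoupled_tr(f, grad, hess, x, delta=1.0, eps_g=1e-6, eps_H=1e-6,
                 eta=0.1, max_iter=200):
    """Trust-region loop that decouples the two steps: first-order phase while the
    gradient is large, second-order phase once the gradient tolerance is met."""
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) > eps_g:
            s = cauchy_step(g, H, delta)      # first-order step
        else:
            s = eigen_step(g, H, delta, eps_H)  # second-order step
            if not s.any():
                return x                      # approximate second-order stationary point
        pred = -(g @ s + 0.5 * s @ H @ s)     # predicted model decrease
        rho = (f(x) - f(x + s)) / pred if pred > 0 else -1.0
        if rho >= eta:
            x, delta = x + s, 2.0 * delta     # accept the step and expand the radius
        else:
            delta *= 0.5                      # reject the step and shrink the radius
    return x
```

For instance, at the origin of f(x) = x1^4 - x1^2 + x2^2 the gradient vanishes but the Hessian has a negative eigenvalue; a purely first-order scheme could stall there, whereas the second-order phase of this sketch escapes along the negative-curvature direction.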

Citation

S. Gratton, C. W. Royer, and L. N. Vicente, A decoupled first/second-order steps technique for nonconvex nonlinear unconstrained optimization with improved complexity bounds, preprint 17-21, Dept. Mathematics, Univ. Coimbra.
