Trust-Region Newton-CG with Strong Second-Order Complexity Guarantees for Nonconvex Optimization

Worst-case complexity guarantees for nonconvex optimization algorithms have been a topic of growing interest. Multiple frameworks that achieve the best known complexity bounds among a broad class of first- and second-order strategies have been proposed. These methods have often been designed primarily with complexity guarantees in mind and, as a result, represent a departure from the algorithms that have proved to be the most effective in practice. In this paper, we consider trust-region Newton methods, one of the most popular classes of algorithms for solving nonconvex optimization problems. By introducing slight modifications to the original scheme, we obtain two methods—one based on exact subproblem solves and one exploiting inexact subproblem solves as in the popular “trust-region Newton-Conjugate-Gradient” (trust-region Newton-CG) method—with iteration and operation complexity bounds that match the best known bounds for the aforementioned class of first- and second-order methods. The resulting trust-region Newton-CG method also retains the attractive practical behavior of classical trust-region Newton-CG, which we demonstrate with numerical comparisons on a standard benchmark test set.
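For context, the sketch below illustrates the classical trust-region Newton-CG iteration that the paper builds on: the trust-region subproblem is solved approximately by the Steihaug-Toint truncated conjugate gradient method, and the radius is adapted from the ratio of actual to predicted reduction. This is a minimal, generic illustration under standard assumptions, not the modified algorithm analyzed in the paper; all function names, tolerances, and parameter values (delta0, eta, the shrink/expand factors) are illustrative choices.

```python
import numpy as np

def steihaug_cg(g, hess_vec, delta, tol=1e-8, max_iter=100):
    """Approximately minimize m(p) = g^T p + 0.5 p^T H p over ||p|| <= delta
    via Steihaug-Toint truncated CG (stops on negative curvature or boundary)."""
    p = np.zeros_like(g)
    r = g.copy()            # residual of the Newton system H p = -g
    d = -r                  # CG search direction
    if np.linalg.norm(r) < tol:
        return p
    for _ in range(max_iter):
        Hd = hess_vec(d)
        dHd = d @ Hd
        if dHd <= 0:
            return _to_boundary(p, d, delta)     # negative curvature: go to boundary
        alpha = (r @ r) / dHd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:
            return _to_boundary(p, d, delta)     # step leaves region: truncate at boundary
        r_next = r + alpha * Hd
        if np.linalg.norm(r_next) < tol:
            return p_next
        beta = (r_next @ r_next) / (r @ r)
        d = -r_next + beta * d
        p, r = p_next, r_next
    return p

def _to_boundary(p, d, delta):
    """Return p + tau*d with tau >= 0 chosen so that ||p + tau*d|| = delta."""
    a, b, c = d @ d, 2.0 * (p @ d), p @ p - delta**2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return p + tau * d

def trust_region_newton_cg(f, grad, hess_vec, x0, delta0=1.0, max_delta=100.0,
                           eta=0.1, g_tol=1e-6, max_iter=200):
    """Classical trust-region Newton-CG loop (illustrative only)."""
    x, delta = x0.copy(), delta0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < g_tol:
            break
        p = steihaug_cg(g, lambda v: hess_vec(x, v), delta)
        pred = -(g @ p + 0.5 * (p @ hess_vec(x, p)))    # predicted (model) decrease
        ared = f(x) - f(x + p)                          # actual decrease
        rho = ared / pred if pred > 0 else -1.0
        if rho < 0.25:
            delta *= 0.25                               # poor agreement: shrink radius
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta = min(2.0 * delta, max_delta)         # strong agreement on boundary: expand
        if rho > eta:
            x = x + p                                   # accept the trial step
    return x
```

In this classical scheme, CG is truncated as soon as it detects a direction of negative curvature or reaches the trust-region boundary, which keeps each iteration at the cost of a few Hessian-vector products; the paper's modifications to this scheme (not shown here) are what yield the stated complexity guarantees.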

Citation

Lehigh ISE Technical Report 19T-028
