Sharp worst-case evaluation complexity bounds for arbitrary-order nonconvex optimization with inexpensive constraints

We provide sharp worst-case evaluation complexity bounds for nonconvex minimization problems with general inexpensive constraints, i.e.\ problems where the cost of evaluating/enforcing the (possibly nonconvex or even disconnected) constraints, if any, is negligible compared to that of evaluating the objective function. These bounds unify, extend or improve all known upper and lower complexity bounds … Read more
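
For orientation, the flavour of bound unified by this line of work can be sketched as follows, assuming a method exploiting derivatives up to order $p$ of a sufficiently smooth objective (the precise statements, assumptions and constants are in the paper):

```latex
% Schematic form of the sharp bound for reaching an
% epsilon-approximate first-order critical point with a
% p-th order method (assumptions and constants omitted):
\[
  \#\{\text{evaluations}\} \;=\; O\!\bigl(\epsilon^{-(p+1)/p}\bigr)
\]
% p = 1 recovers the O(epsilon^{-2}) steepest-descent rate and
% p = 2 the O(epsilon^{-3/2}) cubic-regularization rate.
```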

Worst-case evaluation complexity and optimality of second-order methods for nonconvex smooth optimization

We establish or refute the optimality of inexact second-order methods for unconstrained nonconvex optimization from the point of view of worst-case evaluation complexity, improving and generalizing the results of Cartis, Gould and Toint (2010, 2011). To this end, we consider a new general class of inexact second-order algorithms for unconstrained optimization that includes regularization and trust-region … Read more
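
As a rough illustration of the kind of method analysed here, the sketch below implements one step of a cubic-regularization scheme; the names (`arc_step`, `f`, `g`, `H`, `sigma`) are illustrative placeholders, not the paper's notation, and the subproblem is handed to a generic solver rather than solved under the paper's inexactness conditions:

```python
# Minimal sketch of one adaptive cubic-regularization (ARC-style) step.
import numpy as np
from scipy.optimize import minimize

def arc_step(g, H, x, sigma):
    """Approximately minimize the cubic model
       m(s) = g(x)^T s + 0.5 s^T H(x) s + (sigma/3) ||s||^3
    and return the trial step s."""
    gx, Hx = g(x), H(x)
    model = lambda s: gx @ s + 0.5 * s @ Hx @ s + (sigma / 3) * np.linalg.norm(s) ** 3
    # Any approximate minimizer achieving Cauchy-type decrease suffices
    # for the O(eps^{-3/2}) theory; here we just call a generic solver.
    return minimize(model, np.zeros_like(x)).x

# Toy usage on a quartic-plus-quadratic objective:
g = lambda x: np.array([x[0] ** 3, x[1]])
H = lambda x: np.diag([3 * x[0] ** 2, 1.0])
s = arc_step(g, H, np.array([1.0, 1.0]), sigma=1.0)
```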

Universal regularization methods – varying the power, the smoothness and the accuracy

Adaptive cubic regularization methods have emerged as a credible alternative to linesearch and trust-region methods for smooth nonconvex optimization, with optimal complexity amongst second-order methods. Here we consider a new, general class of adaptive regularization methods that use first- or higher-order local Taylor models of the objective regularized by a(ny) power of the step size and applied … Read more
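
A minimal sketch of such an adaptive-regularization loop with a user-chosen power `r` of the step norm (`r = 3` recovers cubic regularization); all names, constants and the subproblem solver below are illustrative, not the paper's:

```python
# Adaptive regularization with a generic power r of the step size.
import numpy as np
from scipy.optimize import minimize

def adaptive_reg(f, g, H, x, r=3.0, sigma=1.0, eps=1e-5, max_it=200):
    for _ in range(max_it):
        gx = g(x)
        if np.linalg.norm(gx) <= eps:            # eps-approximate first-order point
            break
        Hx = H(x)
        model = lambda s: gx @ s + 0.5 * s @ Hx @ s \
                          + (sigma / r) * np.linalg.norm(s) ** r
        s = minimize(model, np.zeros_like(x)).x
        rho = (f(x) - f(x + s)) / max(-model(s), 1e-16)  # achieved vs predicted decrease
        if rho > 0.1:                             # successful: accept, relax sigma
            x, sigma = x + s, max(0.5 * sigma, 1e-8)
        else:                                     # unsuccessful: regularize more
            sigma *= 2.0
    return x
```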

Corrigendum: On the complexity of finding first-order critical points in constrained nonlinear optimization

In a recent paper (Cartis, Gould and Toint, Math. Prog. A 144(1-2) 93–106, 2014), the evaluation complexity of an algorithm to find an approximate first-order critical point for the general smooth constrained optimization problem was examined. Unfortunately, the proof of Lemma 3.5 in that paper uses a result from an earlier paper in an incorrect … Read more

An example of slow convergence for Newton’s method on a function with globally Lipschitz continuous Hessian

An example is presented where Newton’s method for unconstrained minimization is applied to find an $\epsilon$-approximate first-order critical point of a smooth function and takes a multiple of $\epsilon^{-2}$ iterations and function evaluations to terminate, as many as the steepest-descent method requires in its worst case. The novel feature of the proposed example is that … Read more
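
For concreteness, the setting can be sketched as plain Newton iteration with the $\epsilon$ stopping rule; the slow instance constructed in the paper is not reproduced here, this only fixes the framework the count refers to:

```python
# Newton's method with the eps-approximate first-order stopping rule.
import numpy as np

def newton_first_order(g, H, x, eps=1e-4, max_it=10**6):
    """Iterate x <- x - H(x)^{-1} g(x) until ||g(x)|| <= eps."""
    k = 0
    while np.linalg.norm(g(x)) > eps and k < max_it:
        x = x - np.linalg.solve(H(x), g(x))
        k += 1
    return x, k   # k is the iteration/evaluation count the bound refers to
```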

On the evaluation complexity of constrained nonlinear least-squares and general constrained nonlinear optimization using second-order methods

When solving the general smooth nonlinear optimization problem involving equality and/or inequality constraints, an approximate first-order critical point of accuracy $\epsilon$ can be obtained by a second-order method using cubic regularization in at most $O(\epsilon^{-3/2})$ problem-function evaluations, the same order bound as in the unconstrained case. This result is obtained by first showing that the … Read more
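
The criticality measure in play can be sketched as an approximate KKT residual test; the names below (`g`, `Jc`, `c`, `y`) are illustrative placeholders, not the paper's notation:

```python
# Approximate first-order (KKT) criticality test for a constrained problem.
import numpy as np

def is_eps_kkt(g, Jc, c, y, eps):
    """x is eps-approximately critical if the Lagrangian gradient
    g + Jc^T y and the constraint violation c are both small."""
    return (np.linalg.norm(g + Jc.T @ y) <= eps and
            np.linalg.norm(c) <= eps)
```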

Worst-case evaluation complexity of non-monotone gradient-related algorithms for unconstrained optimization

The worst-case evaluation complexity of finding an approximate first-order critical point using gradient-related non-monotone methods for smooth nonconvex and unconstrained problems is investigated. The analysis covers a practical linesearch implementation of these popular methods, allowing for an unknown number of evaluations of the objective function (and its gradient) per iteration. It is shown that this … Read more
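
A minimal sketch of one such method, a gradient iteration with the classical max-type non-monotone linesearch, shows why the number of objective evaluations per iteration is not known in advance (each backtrack costs one more evaluation); the constants are illustrative:

```python
# Gradient method with a max-type non-monotone Armijo linesearch.
import numpy as np

def nonmonotone_gradient(f, g, x, eps=1e-5, M=10, c1=1e-4, max_it=10**5):
    hist = [f(x)]                        # recent objective values
    for _ in range(max_it):
        gx = g(x)
        if np.linalg.norm(gx) <= eps:
            break
        t, fref = 1.0, max(hist[-M:])    # compare to the recent max, not f(x)
        while f(x - t * gx) > fref - c1 * t * gx @ gx:
            t *= 0.5                     # each test is one more f-evaluation
        x = x - t * gx
        hist.append(f(x))
    return x
```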

On the evaluation complexity of cubic regularization methods for potentially rank-deficient nonlinear least-squares problems and its relevance to constrained nonlinear optimization

We propose a new termination criterion suitable for potentially singular, zero- or non-zero-residual least-squares problems, with which cubic regularization variants take at most $\mathcal{O}(\epsilon^{-3/2})$ residual- and Jacobian-evaluations to drive either the Euclidean norm of the residual or its gradient below $\epsilon$; this is the best-known bound for potentially singular nonlinear least-squares problems. We then … Read more
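
A sketch of a termination test in this spirit, assuming the residual $r$ and Jacobian $J$ are available and using that the gradient of $\|r(x)\|$ is $J^Tr/\|r\|$; see the paper for the exact criterion:

```python
# Stop when either the residual norm or the gradient of the norm is small;
# this treats zero- and non-zero-residual problems uniformly.
import numpy as np

def nls_terminate(r, J, eps):
    rn = np.linalg.norm(r)
    return rn <= eps or np.linalg.norm(J.T @ r) / rn <= eps
```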

A Note About The Complexity Of Minimizing Nesterov’s Smooth Chebyshev-Rosenbrock Function

This short note considers and resolves the apparent contradiction between known worst-case complexity results for first- and second-order methods for solving unconstrained smooth nonconvex optimization problems and a recent note by Jarre (2011) implying a very large lower bound on the number of iterations required to reach the solution’s neighbourhood for a specific problem with … Read more
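
For reference, a commonly cited smooth variant of Nesterov’s Chebyshev-Rosenbrock function can be sketched as below; consult the note for the exact instance, constants and starting point it uses:

```python
# A smooth Chebyshev-Rosenbrock-type test function (illustrative variant).
import numpy as np

def chebyshev_rosenbrock(x):
    return 0.25 * (x[0] - 1.0) ** 2 + np.sum((x[1:] - 2.0 * x[:-1] ** 2 + 1.0) ** 2)
```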

On the complexity of finding first-order critical points in constrained nonlinear optimization

The complexity of finding $\epsilon$-approximate first-order critical points for the general smooth constrained optimization problem is shown to be no worse than $O(\epsilon^{-2})$ in terms of function and constraint evaluations. This result is obtained by analyzing the worst-case behaviour of a first-order short-step homotopy algorithm consisting of a feasibility phase followed by an optimization phase, … Read more
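
The two-phase structure can be sketched as follows; the generic phase solvers and the penalty weight below are placeholders for illustration only, not the paper's short-step homotopy scheme:

```python
# Feasibility phase followed by an optimization phase (schematic).
import numpy as np
from scipy.optimize import minimize

def two_phase(f, c, x0, eps):
    # Phase 1: reduce the constraint violation ||c(x)|| by least squares.
    x = minimize(lambda x: np.sum(c(x) ** 2), x0).x
    # Phase 2: reduce f while penalizing any remaining violation.
    x = minimize(lambda x: f(x) + np.sum(c(x) ** 2) / eps, x).x
    return x
```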