Sharp worst-case evaluation complexity bounds for arbitrary-order nonconvex optimization with inexpensive constraints

We provide sharp worst-case evaluation complexity bounds for nonconvex minimization problems with general inexpensive constraints, i.e. problems where the cost of evaluating or enforcing the (possibly nonconvex or even disconnected) constraints, if any, is negligible compared to that of evaluating the objective function. These bounds unify, extend or improve all known upper and lower complexity bounds … Read more
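
For orientation, the sharp bounds in question take the following general form (a standard statement from this line of work, recalled here as background rather than quoted from the truncated abstract): if the objective has Lipschitz continuous derivatives of order $p$ and one seeks an $\epsilon$-approximate critical point of order $q \le p$, then at most

\[
O\!\left(\epsilon^{-\frac{p+1}{p-q+1}}\right)
\]

evaluations of the objective and its derivatives are required, and matching lower-bound examples show that the exponent cannot be improved.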

Convergence Rate Analysis of a Stochastic Trust Region Method via Supermartingales

We propose a novel framework for analysing the convergence rates of stochastic optimization algorithms with adaptive step sizes. The framework is based on analysing properties of an underlying generic stochastic process, in particular on deriving a bound on the expected stopping time of this process. We utilise this framework to analyse the bounds on expected global … Read more
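
The kind of adaptive-step process involved can be illustrated with a minimal stochastic trust-region loop. The noisy oracles, Cauchy-type step and acceptance test below are illustrative assumptions for the sketch, not the algorithm analysed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_f(x):
    # Noisy oracle for f(x) = ||x||^2 (illustrative stand-in).
    return float(x @ x) + 0.01 * rng.standard_normal()

def noisy_grad(x):
    # Noisy gradient estimate (illustrative stand-in).
    return 2.0 * x + 0.01 * rng.standard_normal(x.shape)

def stochastic_trust_region(x, delta=1.0, eta=0.1, max_iter=200):
    for _ in range(max_iter):
        g = noisy_grad(x)
        # Cauchy-type step: negative estimated gradient, scaled to the
        # trust-region radius delta.
        s = -delta * g / max(np.linalg.norm(g), 1e-12)
        pred = -(g @ s)                     # predicted decrease (linear model)
        ared = noisy_f(x) - noisy_f(x + s)  # estimated actual decrease
        if ared >= eta * pred:              # sufficient-decrease test on estimates
            x, delta = x + s, 2.0 * delta   # accept step, expand radius
        else:
            delta *= 0.5                    # reject step, shrink radius
    return x

print(stochastic_trust_region(np.array([3.0, -4.0])))
```

The supermartingale viewpoint treats the pair (iterate, radius) as a stochastic process and bounds the expected number of iterations until the process first hits an approximately critical point.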

Improving the Flexibility and Robustness of Model-Based Derivative-Free Optimization Solvers

We present DFO-LS, a software package for derivative-free optimization (DFO) of nonlinear least-squares (LS) problems, with optional bound constraints. Inspired by the Gauss-Newton method, DFO-LS constructs simplified linear regression models for the residuals. DFO-LS allows flexible initialization for expensive problems, whereby it can begin making progress from as few as two objective evaluations. Numerical results … Read more
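
A minimal usage sketch, assuming the package's documented `dfols.solve` interface (the Rosenbrock residuals and bounds below are illustrative choices, not from the abstract):

```python
import numpy as np
import dfols  # pip install DFO-LS

def residuals(x):
    # Rosenbrock written as a least-squares problem with two residuals.
    return np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])

x0 = np.array([-1.2, 1.0])
lower, upper = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
soln = dfols.solve(residuals, x0, bounds=(lower, upper))
print(soln.x)  # approximate minimizer
```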

A derivative-free Gauss-Newton method

We present DFO-GN, a derivative-free version of the Gauss-Newton method for solving nonlinear least-squares problems. As is common in derivative-free optimization, DFO-GN uses interpolation of function values to build a model of the objective, which is then used within a trust-region framework to give a globally convergent algorithm requiring $O(\epsilon^{-2})$ iterations to reach approximate first-order criticality … Read more
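
The modelling idea can be sketched as follows: interpolate the residual vector on $n+1$ points to obtain a linear model $r(x+s) \approx r(x) + Js$, then take a Gauss-Newton step with the interpolated $J$. With the coordinate simplex used below this reduces to forward differences; DFO-GN itself maintains and updates a general interpolation set, so treat this as a sketch of the idea only:

```python
import numpy as np

def interpolated_model(r, x, h=1e-3):
    # Fit r(x + s) ~= r(x) + J s from values of r on the n+1 points
    # {x, x + h e_1, ..., x + h e_n}; each interpolation condition
    # r(x + h e_i) = r(x) + h J e_i determines one column of J.
    n = x.size
    rx = r(x)
    J = np.empty((rx.size, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        J[:, i] = (r(x + e) - rx) / h
    return rx, J

def gauss_newton_step(rx, J, delta):
    # Minimize ||rx + J s|| and truncate the step to the trust
    # region ||s|| <= delta.
    s, *_ = np.linalg.lstsq(J, -rx, rcond=None)
    norm_s = np.linalg.norm(s)
    return s if norm_s <= delta else (delta / norm_s) * s

rosen = lambda x: np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])
rx, J = interpolated_model(rosen, np.array([-1.2, 1.0]))
print(gauss_newton_step(rx, J, delta=0.5))
```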

Worst-case evaluation complexity and optimality of second-order methods for nonconvex smooth optimization

We establish or refute the optimality of inexact second-order methods for unconstrained nonconvex optimization from the point of view of worst-case evaluation complexity, improving and generalizing the results of Cartis, Gould and Toint (2010, 2011). To this end, we consider a new general class of inexact second-order algorithms for unconstrained optimization that includes regularization and trust-region … Read more
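
As background, the benchmark rates being compared here are standard facts from this literature (recalled for orientation, not quoted from the truncated abstract): to find $x$ with $\|\nabla f(x)\| \le \epsilon$, steepest descent and classical Newton or trust-region variants may need

\[
O\!\left(\epsilon^{-2}\right)
\]

evaluations in the worst case, whereas cubic regularization achieves $O\!\left(\epsilon^{-3/2}\right)$, the best possible order among second-order methods on this problem class.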

Improved second-order evaluation complexity for unconstrained nonlinear optimization using high-order regularized models

The unconstrained minimization of a sufficiently smooth objective function $f(x)$ is considered, for which derivatives up to order $p$, $p\geq 2$, are assumed to be available. An adaptive regularization algorithm is proposed that uses Taylor models of the objective of order $p$ and that is guaranteed to find a first- and second-order critical point in … Read more
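
In the standard notation of this algorithm family, the regularized Taylor model referred to is

\[
m_k(s) \;=\; T_p(x_k, s) \;+\; \frac{\sigma_k}{p+1}\,\|s\|^{p+1},
\]

where $T_p(x_k, s)$ is the $p$th-order Taylor expansion of $f$ about the iterate $x_k$ and $\sigma_k > 0$ is an adaptively updated regularization weight; each iteration computes its step by approximately minimizing $m_k$.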

Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization

Necessary conditions for high-order optimality in smooth nonlinear constrained optimization are explored and their inherent intricacy discussed. A two-phase minimization algorithm is proposed that can achieve approximate first-, second- and third-order criticality; its evaluation complexity is analyzed as a function of the choice (among existing methods) of an inner algorithm for solving subproblems in … Read more
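
Schematically, two-phase schemes of this kind first reduce infeasibility and then work on the objective; the formulation below is the generic shape of such schemes, not the paper's precise algorithm:

\[
\text{Phase 1: } \min_x \tfrac12\|c(x)\|^2
\qquad\text{then}\qquad
\text{Phase 2: approximately minimize } f \text{ while keeping } \|c(x)\| \text{ small.}
\]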

Universal regularization methods – varying the power, the smoothness and the accuracy

Adaptive cubic regularization methods have emerged as a credible alternative to linesearch and trust-region methods for smooth nonconvex optimization, with optimal complexity amongst second-order methods. Here we consider a new, general class of adaptive regularization methods that use first- or higher-order local Taylor models of the objective, regularized by a(ny) power of the step size, and applied … Read more
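
Concretely, the models in question have the form

\[
m_k(s) \;=\; T_p(x_k, s) \;+\; \frac{\sigma_k}{r}\,\|s\|^{r},
\]

where $T_p$ is the $p$th-order Taylor model, $r$ is the chosen regularization power (taken larger than $p$ in this sketch, so that the regularization dominates the model) and $\sigma_k > 0$ is adaptive; the resulting complexity then depends on the interplay between $p$, $r$ and the smoothness of the relevant derivative.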

Second-order optimality and beyond: characterization and evaluation complexity in convexly-constrained nonlinear optimization

High-order optimality conditions for convexly-constrained nonlinear optimization problems are analyzed. A corresponding (expensive) measure of criticality for arbitrary order is proposed and extended to define high-order $\epsilon$-approximate critical points. This new measure is then used within a conceptual trust-region algorithm to show that, if derivatives of the objective function up to order $q \geq 1$ … Read more
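
One common way to express such a measure in this strand of work (the precise details are in the paper): for feasible set $\mathcal{F}$ and order $j$,

\[
\phi_{f,j}^{\,\delta}(x) \;=\; f(x) \;-\; \min_{\substack{x+d \,\in\, \mathcal{F} \\ \|d\| \le \delta}} T_j(x, d),
\]

the largest decrease of the $j$th-order Taylor model achievable from $x$ within the feasible set and a ball of radius $\delta$; $x$ is an $\epsilon$-approximate critical point of order $q$ when this quantity is suitably small, relative to powers of $\delta$, for $j = 1, \dots, q$. The inner global minimization is what makes the measure expensive to evaluate.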

Improved worst-case evaluation complexity for potentially rank-deficient nonlinear least-Euclidean-norm problems using higher-order regularized models

We present an improved evaluation complexity bound for nonlinear least-squares problems using higher-order regularization methods. Citation: Technical Report NA 15-17, Numerical Analysis Group, University of Oxford, 2015.
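
For orientation, complexity analyses of regularization methods for minimizing $\|r(x)\|$ in this line of work typically terminate when either the residual itself or a scaled gradient is small, e.g.

\[
\|r(x_k)\| \le \epsilon_p
\qquad\text{or}\qquad
\frac{\|J(x_k)^{T} r(x_k)\|}{\|r(x_k)\|} \le \epsilon_d,
\]

where $J$ is the Jacobian of $r$; the scaling by $\|r(x_k)\|$ is what permits meaningful guarantees even when the problem is rank-deficient and the residual does not vanish (the exact criterion used is in the report).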