Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization

Necessary conditions for high-order optimality in smooth nonlinear constrained optimization are explored and their inherent intricacy is discussed. A two-phase minimization algorithm is proposed that can achieve approximate first-, second- and third-order criticality, and its evaluation complexity is analyzed as a function of the choice (among existing methods) of an inner algorithm for solving subproblems in …
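
As a hedged illustration of what the first two criticality levels mean (a generic numerical check, not the paper's two-phase algorithm; the tolerances and the unconstrained setting are assumptions of this sketch):

```python
import numpy as np

def approx_criticality(grad, hess, eps1=1e-6, eps2=1e-6):
    """Check epsilon-approximate first- and second-order criticality at a
    point where grad = nabla f(x) and hess = nabla^2 f(x) (unconstrained
    case).  First order: ||grad|| <= eps1.  Second order: the smallest
    Hessian eigenvalue is at least -eps2.  Third-order conditions involve
    the third-derivative tensor on the Hessian's near-null space, which is
    where the intricacy discussed in the paper begins."""
    first = np.linalg.norm(grad) <= eps1
    second = np.linalg.eigvalsh(hess).min() >= -eps2
    return first, second

# f(x) = x0^2 - x1^3 at the origin: first- and second-order critical,
# yet not a local minimizer (the cubic term escapes along x1 > 0).
print(approx_criticality(np.zeros(2), np.diag([2.0, 0.0])))  # (True, True)
```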

Partially separable convexly-constrained optimization with non-Lipschitz singularities and its complexity

An adaptive regularization algorithm using high-order models is proposed for partially separable convexly constrained nonlinear optimization problems whose objective function contains non-Lipschitzian $\ell_q$-norm regularization terms for $q\in (0,1)$. It is shown that the algorithm using a $p$-th order Taylor model for $p$ odd needs in general at most $O(\epsilon^{-(p+1)/p})$ evaluations of the objective function and …
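
For concreteness, a representative instance of this problem class (my notation; the element structure and penalty weight are illustrative assumptions, not taken from the paper) is

\[
\min_{x \in \mathcal{F}} \; f(x) \;=\; \sum_{i=1}^{m} f_i(U_i x) \;+\; \lambda \sum_{j=1}^{n} |x_j|^q,
\qquad q \in (0,1),
\]

where each $U_i$ selects the small subset of variables on which the element function $f_i$ depends (the partially separable structure), $\mathcal{F}$ is a convex feasible set, and the $|x_j|^q$ terms are the non-Lipschitz singularities: their derivatives blow up as $x_j \to 0$.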

Universal regularization methods – varying the power, the smoothness and the accuracy

Adaptive cubic regularization methods have emerged as a credible alternative to linesearch and trust-region methods for smooth nonconvex optimization, with optimal complexity amongst second-order methods. Here we consider a general, new class of adaptive regularization methods that use first- or higher-order local Taylor models of the objective, regularized by a(ny) power of the step size, and applied …
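
As a minimal sketch of how the regularization power enters (assumptions of mine: a first-order model, the Euclidean norm, and generic update constants; this is not the paper's algorithm, whose models may be of any order):

```python
import numpy as np

def ar_step(f, grad, x, sigma, r=2.0, eta=0.1, gamma=2.0):
    """One step of a first-order adaptive regularization method that
    minimizes the model m(s) = f(x) + g.s + (sigma/r)||s||^r.  Along -g
    the exact minimizer has length t = (||g||/sigma)^(1/(r-1)), and the
    usual ratio test compares achieved to predicted decrease."""
    g = grad(x)
    norm_g = np.linalg.norm(g)
    if norm_g < 1e-10:                 # approximately first-order critical
        return x, sigma
    t = (norm_g / sigma) ** (1.0 / (r - 1.0))
    s = -t * g / norm_g
    predicted = t * norm_g * (1.0 - 1.0 / r)   # f(x) - m(s) > 0
    rho = (f(x) - f(x + s)) / predicted
    if rho >= eta:                     # successful step: accept, relax sigma
        return x + s, max(sigma / gamma, 1e-8)
    return x, sigma * gamma            # unsuccessful: shorten the next step

# r = 2 recovers a gradient method with adaptive stepsize 1/sigma;
# the power r controls how strongly long steps are penalized.
f = lambda z: z[0] ** 2 + 0.5 * z[1] ** 2
g = lambda z: np.array([2.0 * z[0], z[1]])
x, sigma = np.array([2.0, 1.0]), 1.0
for _ in range(100):
    x, sigma = ar_step(f, g, x, sigma)
```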

Approximate norm descent methods for constrained nonlinear systems

We address the solution of convex-constrained nonlinear systems of equations where the Jacobian matrix is unavailable or its computation/storage is burdensome. To solve such problems efficiently, we propose a new class of algorithms that are “derivative-free” both in the computation of the search direction and in the selection of the steplength. Search directions …
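
As a loose sketch of the “derivative-free in both direction and steplength” idea, in the spirit of residual methods rather than the authors' exact scheme (all constants are assumptions, and the convex constraints are omitted for brevity):

```python
import numpy as np

def approx_norm_descent(F, x, max_it=500, tol=1e-8):
    """Solve F(x) = 0 without derivatives: the search direction is the
    negative residual (no Jacobian is formed or approximated) and a step
    is accepted under the relaxed test ||F(x + t d)|| <= (1 + eta)||F(x)||
    with summable relaxations eta, so ||F|| decreases only approximately.
    Sketch only; the constraints and safeguards of the paper are omitted."""
    eta = 1.0
    for k in range(max_it):
        Fx = F(x)
        nFx = np.linalg.norm(Fx)
        if nFx <= tol:
            break
        d, t = -Fx, 1.0
        while np.linalg.norm(F(x + t * d)) > (1.0 + eta) * nFx and t > 1e-12:
            t *= 0.5                   # derivative-free backtracking
        x = x + t * d
        eta *= 0.5                     # keep the sum of relaxations finite
    return x

# Example: the mildly nonlinear system x + 0.1 x^3 = 1 (componentwise).
print(approx_norm_descent(lambda z: z + 0.1 * z ** 3 - 1.0, np.zeros(3)))
```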

Second-order optimality and beyond: characterization and evaluation complexity in convexly-constrained nonlinear optimization

High-order optimality conditions for convexly-constrained nonlinear optimization problems are analyzed. A corresponding (expensive) measure of criticality for arbitrary order is proposed and extended to define high-order $\epsilon$-approximate critical points. This new measure is then used within a conceptual trust-region algorithm to show that, if derivatives of the objective function up to order $q \geq 1$ …
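
The flavour of such a measure (stated here from the related literature, so treat the exact normalization as an assumption rather than a quotation from the truncated abstract): with $T_q(x,d)$ the $q$-th order Taylor series of $f$ at $x$,

\[
\phi_f^{q,\delta}(x) \;=\; f(x) \;-\; \min_{x+d \,\in\, \mathcal{F},\; \|d\|\le\delta} T_q(x,d),
\qquad
T_q(x,d) \;=\; \sum_{j=0}^{q} \frac{1}{j!}\,\nabla^j f(x)[d]^j,
\]

which vanishes (after suitable scaling in $\delta$) exactly at $q$-th order critical points. The measure is expensive because evaluating it requires globally minimizing a degree-$q$ polynomial over a feasible ball, which is why the trust-region algorithm using it is described as conceptual.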

Improved worst-case evaluation complexity for potentially rank-deficient nonlinear least-Euclidean-norm problems using higher-order regularized models

We present an improved evaluation complexity bound for nonlinear least-squares problems using higher-order regularization methods.

Citation: Technical Report NA 15-17, Numerical Analysis Group, University of Oxford, 2015.

Evaluation complexity bounds for smooth constrained nonlinear optimization using scaled KKT conditions and high-order models

Evaluation complexity for convexly constrained optimization is considered. It is shown first that the complexity bound of $O(\epsilon^{-3/2})$ proved by Cartis, Gould and Toint (IMAJNA 32(4), 2012, pp. 1662-1695) for computing an $\epsilon$-approximate first-order critical point can be obtained under significantly weaker assumptions. Moreover, the result is generalized to the case where high-order derivatives are …
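
To place the two statements side by side (the following identification follows the pattern established in this line of work, not the truncated text itself): with derivatives up to order $p$, the corresponding bound takes the form $O(\epsilon^{-(p+1)/p})$, which reduces to the quoted $O(\epsilon^{-3/2})$ for $p = 2$ and to the classical first-order $O(\epsilon^{-2})$ for $p = 1$.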

Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models

The evaluation complexity of general nonlinear, possibly nonconvex, constrained optimization is analyzed. It is shown that, under suitable smoothness conditions, an $\epsilon$-approximate first-order critical point of the problem can be computed in order $O(\epsilon^{1-2(p+1)/p})$ evaluations of the problem's functions and their first $p$ derivatives. This is achieved by using a two-phase algorithm inspired by Cartis, Gould, …
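
To read the exponent (pure algebra, no new claim):

\[
\epsilon^{\,1-2(p+1)/p} \;=\; \epsilon^{-(p+2)/p},
\]

so the bound is $O(\epsilon^{-3})$ when only first derivatives are used ($p=1$) and $O(\epsilon^{-2})$ when second derivatives are available ($p=2$); the price of using unscaled KKT conditions is visible by comparison with the $O(\epsilon^{-(p+1)/p})$ bounds of the scaled-KKT entry above.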

BFO, a trainable derivative-free Brute Force Optimizer for nonlinear bound-constrained optimization and equilibrium computations with continuous and discrete variables

A direct-search, derivative-free Matlab optimizer for bound-constrained problems is described, whose remarkable features are its ability to handle a mix of continuous and discrete variables, a versatile interface, and a novel self-training option. Its performance compares favourably with that of NOMAD, a state-of-the-art package. It is also applicable to multilevel equilibrium- or …
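
A minimal sketch of the general idea of direct search on mixed variables (illustrative only; BFO itself is a Matlab package with far richer polling, mesh-refinement and training machinery, and nothing below is taken from its source):

```python
import numpy as np

def mixed_direct_search(f, x, is_discrete, lo, hi, step=1.0, tol=1e-6):
    """Coordinate-wise direct search on a mix of continuous and discrete
    variables under bound constraints lo <= x <= hi.  Continuous
    coordinates poll with a shrinking stepsize; discrete ones always move
    by whole units.  Illustrative sketch only, not the BFO algorithm."""
    x = np.asarray(x, dtype=float)
    while step > tol:
        improved = False
        for i in range(len(x)):
            h = 1.0 if is_discrete[i] else step
            for cand in (x[i] + h, x[i] - h):
                if lo[i] <= cand <= hi[i]:
                    y = x.copy(); y[i] = cand
                    if f(y) < f(x):
                        x, improved = y, True
                        break
        if not improved:
            step *= 0.5  # refine the mesh for the continuous variables
    return x

# Example: one continuous and one integer variable.
f = lambda z: (z[0] - 0.3) ** 2 + (z[1] - 2) ** 2
print(mixed_direct_search(f, [0, 0], [False, True], [-5, -5], [5, 5]))
```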

Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models

The worst-case evaluation complexity for smooth (possibly nonconvex) unconstrained optimization is considered. It is shown that, if one is willing to use derivatives of the objective function up to order $p$ (for $p\geq 1$) and to assume Lipschitz continuity of the $p$-th derivative, then an $\epsilon$-approximate first-order critical point can be computed in at most …
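
The regularized model underlying such methods is standard in this literature (the scaling of the regularization term varies between papers, so take this normalization as an assumption):

\[
m_p(x,s) \;=\; \sum_{j=0}^{p} \frac{1}{j!}\,\nabla^j f(x)[s]^j \;+\; \frac{\sigma}{p+1}\,\|s\|^{p+1},
\]

where each iteration approximately minimizes $m_p$ over the step $s$ and the weight $\sigma$ is adapted to the observed decrease; for $p = 2$ this recovers adaptive cubic regularization.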