QPLIB: A Library of Quadratic Programming Instances

This paper describes a new instance library for Quadratic Programming (QP), i.e., the family of continuous and (mixed)-integer optimization problems where the objective function, the constraints, or both are quadratic. QP is a very “varied” class of problems, comprising sub-classes ranging from trivial to undecidable. Solution methods for QP are correspondingly diverse, ranging …
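
For concreteness, a generic instance of this class can be written as the following mixed-integer quadratically constrained quadratic program (a standard textbook formulation, not necessarily QPLIB's exact data format):

$$
\min_{x}\ \tfrac{1}{2}\, x^\top Q_0\, x + c_0^\top x
\quad \text{s.t.} \quad
\tfrac{1}{2}\, x^\top Q_i\, x + c_i^\top x \le b_i,\ \ i = 1,\dots,m,
\qquad x_j \in \mathbb{Z}\ \text{for}\ j \in \mathcal{I},
$$

where any of the matrices $Q_i$ may be zero (giving linear terms or constraints), convex, or indefinite, and $\mathcal{I}$ indexes the integer variables.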

Universal regularization methods – varying the power, the smoothness and the accuracy

Adaptive cubic regularization methods have emerged as a credible alternative to linesearch and trust-region methods for smooth nonconvex optimization, with optimal complexity amongst second-order methods. Here we consider a general new class of adaptive regularization methods that use first- or higher-order local Taylor models of the objective, regularized by a(ny) power of the step size, and applied …
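
To illustrate the mechanism in the simplest (cubic, second-order) case, the sketch below regularizes a quadratic Taylor model by the third power of the step size and adapts the weight $\sigma$ with the usual accept/reject test. The crude one-dimensional model minimization and all constants are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def cubic_reg_minimize(f, grad, hess, x, sigma=1.0, eta=0.1, iters=100):
    """Minimal adaptive cubic-regularization loop (illustrative sketch).

    Each trial step approximately minimizes the regularized model
    m(s) = f(x) + g's + 0.5 s'Hs + (sigma/3)||s||^3, here only along
    the steepest-descent direction for simplicity."""
    for _ in range(iters):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < 1e-8:
            break
        d = -g / np.linalg.norm(g)                  # unit descent direction
        alphas = np.linspace(1e-6, 10.0, 500)
        m = [f(x) + a * (g @ d) + 0.5 * a**2 * (d @ H @ d)
             + (sigma / 3.0) * a**3 for a in alphas]
        k = int(np.argmin(m))
        s = alphas[k] * d
        rho = (f(x) - f(x + s)) / max(f(x) - m[k], 1e-16)
        if rho >= eta:                              # enough actual decrease:
            x, sigma = x + s, max(sigma / 2, 1e-8)  # accept, relax weight
        else:
            sigma *= 2.0                            # reject, regularize more
    return x
```

For example, `cubic_reg_minimize(lambda z: z @ z, lambda z: 2 * z, lambda z: 2 * np.eye(z.size), np.array([3.0, -4.0]))` drives the iterate to the origin.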

Second-order optimality and beyond: characterization and evaluation complexity in convexly-constrained nonlinear optimization

High-order optimality conditions for convexly-constrained nonlinear optimization problems are analyzed. A corresponding (expensive) measure of criticality for arbitrary order is proposed and extended to define high-order $\epsilon$-approximate critical points. This new measure is then used within a conceptual trust-region algorithm to show that, if derivatives of the objective function up to order $q \geq 1$ …
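
As a sketch of the kind of measure involved (a plausible form following this line of work; the paper's exact definition may differ in scaling details): with $T_q(x,d)$ the $q$th-order Taylor expansion of $f$ at $x$ and $\mathcal{F}$ the convex feasible set,

$$
\phi^{\Delta}_{f,q}(x) \;=\; f(x) \;-\; \min_{\substack{x+d\,\in\,\mathcal{F} \\ \|d\| \le \Delta}} T_q(x,d),
$$

and $x$ is declared $\epsilon$-approximately critical of order $q$ when this quantity is at most $\epsilon$ times a suitable power of $\Delta$. The expense is plain from the definition: it requires globally minimizing a degree-$q$ model over the intersection of $\mathcal{F}$ with a ball.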

A Dual Gradient-Projection Method for Large-Scale Strictly Convex Quadratic Problems

The details of a solver for minimizing a strictly convex quadratic objective function subject to general linear constraints are presented. The method uses a gradient projection algorithm enhanced with subspace acceleration to solve the bound-constrained dual optimization problem. Such gradient projection methods are well known, but are typically employed to solve the primal problem when only …
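
The structure being exploited is easy to state: for a strictly convex primal $\min_x \tfrac{1}{2} x^\top H x + c^\top x$ subject to $Ax \ge b$, the dual is a bound-constrained concave quadratic in the multipliers $y \ge 0$, so the projection is just a componentwise maximum with zero. A minimal sketch of the plain projected-gradient iteration (without the subspace acceleration described above; names and step size are illustrative):

```python
import numpy as np

def dual_gradient_projection(H, c, A, b, steps=1000, lr=1e-2):
    """Projected gradient ascent on the dual of
    min 0.5 x'Hx + c'x  s.t.  Ax >= b,  H symmetric positive definite."""
    Hinv = np.linalg.inv(H)        # a factorization would be used in practice
    y = np.zeros(A.shape[0])
    for _ in range(steps):
        x = Hinv @ (A.T @ y - c)   # Lagrangian minimizer for fixed y
        y = np.maximum(y + lr * (b - A @ x), 0.0)  # ascent step, then project
    return Hinv @ (A.T @ y - c), y
```

Recovering the primal solution from the final multipliers, as in the last line, is exact here because strict convexity makes the inner minimizer unique.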

Improved worst-case evaluation complexity for potentially rank-deficient nonlinear least-Euclidean-norm problems using higher-order regularized models

We present an improved evaluation complexity bound for nonlinear least-squares problems using higher-order regularization methods. Citation: Technical Report NA 15-17, Numerical Analysis Group, University of Oxford, 2015.

Evaluation complexity bounds for smooth constrained nonlinear optimization using scaled KKT conditions and high-order models

Evaluation complexity for convexly constrained optimization is considered, and it is first shown that the complexity bound of $O(\epsilon^{-3/2})$ proved by Cartis, Gould and Toint (IMAJNA 32(4), 2012, pp. 1662–1695) for computing an $\epsilon$-approximate first-order critical point can be obtained under significantly weaker assumptions. Moreover, the result is generalized to the case where high-order derivatives are …
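
The flavour of the scaled conditions (shown here in one plausible form; consult the paper for the precise statement): with $J(x)$ the constraint Jacobian and $y$ the multipliers, a point is an $\epsilon$-approximate first-order critical point when

$$
\|\nabla f(x) - J(x)^\top y\| \;\le\; \epsilon\,\bigl(1 + \|y\|\bigr),
$$

so that the stationarity tolerance is measured relative to the size of the multipliers rather than in absolute terms.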

A Filter SQP Method: Local Convergence and Numerical Results

The work by Gould, Loh, and Robinson [“A filter method with unified step computation for nonlinear optimization”, SIAM J. Optim., 24 (2014), pp. 175–209] established global convergence of a new filter line search method for finding local first-order solutions to nonlinear and nonconvex constrained optimization problems. A key contribution of that work was that the …
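
For readers unfamiliar with filter methods, the core device is a dominance test on pairs (constraint violation $\theta$, objective value $f$): a trial point is acceptable only if no stored pair dominates it, with a small margin giving the usual sloping envelope. A bare-bones sketch (the margin constant and helper names are illustrative, not those of the paper):

```python
def acceptable(filter_pairs, theta, f, gamma=1e-5):
    """True if (theta, f) is not dominated by any filter entry,
    using the standard margin (sloping-envelope) acceptance test."""
    return all(theta <= (1 - gamma) * th or f <= fv - gamma * th
               for th, fv in filter_pairs)

def add_to_filter(filter_pairs, theta, f):
    """Insert (theta, f) and discard entries that it dominates."""
    kept = [(th, fv) for th, fv in filter_pairs if th < theta or fv < f]
    kept.append((theta, f))
    return kept
```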

Corrigendum: On the complexity of finding first-order critical points in constrained nonlinear optimization

In a recent paper (Cartis, Gould and Toint, Math. Prog. A, 144(1-2), pp. 93–106, 2014), the evaluation complexity of an algorithm to find an approximate first-order critical point for the general smooth constrained optimization problem was examined. Unfortunately, the proof of Lemma 3.5 in that paper uses a result from an earlier paper in an incorrect …

Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients

The worst-case behaviour of a general class of regularization algorithms is considered in the case where only objective function values and associated gradient vectors are evaluated. Upper bounds are derived on the number of such evaluations that are needed for the algorithm to produce an approximate first-order critical point whose accuracy is within a user-defined …
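
The smoothness assumption in question is Hölder rather than Lipschitz continuity of the gradient: there exist $L > 0$ and $\beta \in (0,1]$ such that

$$
\|\nabla f(x) - \nabla f(y)\| \;\le\; L\,\|x - y\|^{\beta} \quad \text{for all } x, y,
$$

with $\beta = 1$ recovering the Lipschitz case. Methods in this class then compute steps by minimizing a regularized model of the generic form $f(x) + \nabla f(x)^\top s + \tfrac{\sigma}{r}\|s\|^{r}$ for some power $r > 1$ and adaptively updated weight $\sigma$ (a generic template; the paper's exact model may differ).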

Adaptive Augmented Lagrangian Methods: Algorithms and Practical Numerical Experience

In this paper, we consider augmented Lagrangian (AL) algorithms for solving large-scale nonlinear optimization problems that employ adaptive strategies for updating the penalty parameter. Our work is motivated by the recently proposed adaptive AL trust region method by Curtis et al. [An adaptive augmented Lagrangian method for large-scale constrained optimization, Math. Program. 152 (2015), pp. 201–245]. …
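
As background, the basic (non-adaptive) AL iteration for equality constraints $c(x) = 0$ alternates an unconstrained minimization of the augmented Lagrangian with first-order multiplier and penalty updates; the adaptive strategies studied above replace the simple "increase the penalty when infeasibility stalls" rule used below. A minimal sketch (illustrative only, not the authors' method):

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, c, x0, lam0, mu=10.0, outer=20, tol=1e-8):
    """Basic AL method for  min f(x)  s.t.  c(x) = 0  (c vector-valued).
    L_A(x; lam, mu) = f(x) + lam'c(x) + (mu/2)||c(x)||^2."""
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    viol = np.inf
    for _ in range(outer):
        LA = lambda z, l=lam, m=mu: f(z) + l @ c(z) + 0.5 * m * (c(z) @ c(z))
        x = minimize(LA, x, method="BFGS").x   # inner subproblem
        new_viol = np.linalg.norm(c(x))
        if new_viol <= tol:
            break
        lam = lam + mu * c(x)                  # first-order multiplier update
        if new_viol > 0.5 * viol:              # infeasibility shrinking too
            mu *= 10.0                         # slowly: increase the penalty
        viol = new_viol
    return x, lam
```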