Transformations enabling to construct limited-memory Broyden class methods

The Broyden class of quasi-Newton updates for inverse Hessian approximation is transformed to the formal BFGS update, which makes it possible to generalize the well-known Nocedal method based on the Strang recurrences to the scaled limited-memory Broyden family, using the same number of stored vectors as the limited-memory BFGS method. Two variants are given, the … Read more
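The Strang (two-loop) recurrence mentioned above is the standard way the limited-memory BFGS method applies its inverse-Hessian approximation from a short history of stored vector pairs. A minimal sketch in plain Python (illustrative of standard L-BFGS only, not of the paper's Broyden-class generalization; the function and parameter names are ours):

```python
def two_loop(grad, s_list, y_list, gamma=1.0):
    """Return H*grad, where H is the L-BFGS inverse-Hessian approximation
    built from stored pairs (s_k, y_k), oldest first; gamma scales the
    initial matrix H_0 = gamma * I."""
    rhos = [1.0 / sum(si * yi for si, yi in zip(s, y))
            for s, y in zip(s_list, y_list)]
    q = list(grad)
    alphas = []
    # First loop: newest pair to oldest.
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        alpha = rho * sum(si * qi for si, qi in zip(s, q))
        q = [qi - alpha * yi for qi, yi in zip(q, y)]
        alphas.append(alpha)
    # Apply the scaled initial inverse Hessian.
    r = [gamma * qi for qi in q]
    # Second loop: oldest pair to newest.
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos),
                                  reversed(alphas)):
        beta = rho * sum(yi * ri for yi, ri in zip(y, r))
        r = [ri + (alpha - beta) * si for ri, si in zip(r, s)]
    return r
```

Only the stored pairs and two inner-product loops are needed, which is why a scaled Broyden-family generalization with the same storage is attractive.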

An adaptive cubic regularisation algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity

The adaptive cubic overestimation algorithm described in Cartis, Gould and Toint (2007) is adapted to the problem of minimizing a nonlinear, possibly nonconvex, smooth objective function over a convex domain. Convergence to first-order critical points is shown under standard assumptions, but without any Lipschitz continuity requirement on the objective’s Hessian. A worst-case complexity analysis in … Read more

On solving trust-region and other regularised subproblems in optimization

The solution of trust-region and regularisation subproblems which arise in unconstrained optimization is considered. Building on the pioneering work of Gay, Moré and Sorensen, methods which obtain the solution of a sequence of parametrized linear systems by factorization are used. Enhancements using high-order polynomial approximation and inverse iteration ensure that the resulting method is both … Read more
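In the Moré–Sorensen line of methods alluded to above, the trust-region subproblem is reduced to a scalar root-finding problem: find the multiplier λ with ‖s(λ)‖ = Δ, where (H + λI)s(λ) = −g, by Newton's method on the secular equation φ(λ) = 1/‖s(λ)‖ − 1/Δ. A sketch for the easy positive-definite diagonal case (our own simplification; real solvers work with factorizations of H + λI and handle the hard case):

```python
def tr_subproblem_diag(d, g, delta, iters=50):
    """Minimize g^T s + 0.5 s^T diag(d) s subject to ||s|| <= delta,
    with d > 0, via Newton's method on the secular equation."""
    def s_norm(lam):
        return sum((gi / (di + lam)) ** 2 for di, gi in zip(d, g)) ** 0.5
    # If the unconstrained minimizer is interior, lambda = 0 is optimal.
    if s_norm(0.0) <= delta:
        return [-gi / di for di, gi in zip(d, g)], 0.0
    lam = 0.0
    for _ in range(iters):
        ns = s_norm(lam)
        # d||s||/dlam = -sum g_i^2/(d_i+lam)^3 / ||s||
        dns = -sum(gi ** 2 / (di + lam) ** 3 for di, gi in zip(d, g)) / ns
        phi = 1.0 / ns - 1.0 / delta
        dphi = -dns / ns ** 2
        lam = max(0.0, lam - phi / dphi)
    return [-gi / (di + lam) for di, gi in zip(d, g)], lam
```

Newton on 1/‖s(λ)‖ rather than on ‖s(λ)‖ directly is the classical trick: the former is nearly linear in λ, so the iteration converges rapidly.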

An Active Set Strategy for Solving Optimization Problems with up to 200,000,000 Nonlinear Constraints

We propose a numerical algorithm for solving smooth nonlinear programming problems with a large number of constraints, but a moderate number of variables. The active set method proceeds from a given bound mw for the maximum number of expected violated constraints, where mw is a user-provided parameter less than the total number of constraints. A … Read more

The Constant Rank Condition and Second Order Constraint Qualifications

The Constant Rank condition for feasible points of nonlinear programming problems was defined by Janin in Ref. 1. In that paper the author proved that the condition was a first order constraint qualification. In this work we prove that the Janin Constant Rank condition is, in addition, a second order constraint qualification. We also define … Read more

An Interior-Point Algorithm for Large-Scale Nonlinear Optimization with Inexact Step Computations

We present a line-search algorithm for large-scale continuous optimization. The algorithm is matrix-free in that it does not require the factorization of derivative matrices. Instead, it uses iterative linear system solvers. Inexact step computations are supported in order to save computational expense during each iteration. The algorithm is an interior-point approach derived from an inexact … Read more

The Integer Approximation Error in Mixed-Integer Optimal Control

We extend recent work on nonlinear optimal control problems with integer restrictions on some of the control functions (mixed-integer optimal control problems, MIOCP) in two ways. We improve a theorem that states that the solution of a relaxed and convexified problem can be approximated with arbitrary precision by a solution fulfilling the integer requirements. Unlike … Read more

A New Relaxation Scheme for Mathematical Programs with Equilibrium Constraints

We present a new relaxation scheme for mathematical programs with equilibrium constraints (MPEC), where the complementarity constraints are replaced by a reformulation that is exact for the complementarity conditions corresponding to sufficiently non-degenerate complementarity components and relaxes only the remaining complementarity conditions. A positive parameter determines to what extent the complementarity conditions are relaxed. The … Read more
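For contrast with the selective scheme described above, the simplest global relaxation (due to Scholtes) replaces each complementarity condition 0 ≤ G_i ⟂ H_i ≥ 0 by G_i ≥ 0, H_i ≥ 0, G_i·H_i ≤ t, relaxing every component uniformly by the parameter t. A small sketch computing the constraint violations (our illustration; the paper's scheme instead keeps sufficiently non-degenerate components exact):

```python
def scholtes_relaxation(g, h, t):
    """Violation residuals of the relaxed complementarity system
    g_i >= 0, h_i >= 0, g_i * h_i <= t; all zeros means feasible."""
    return [(min(gi, 0.0),            # violation of g_i >= 0
             min(hi, 0.0),            # violation of h_i >= 0
             max(gi * hi - t, 0.0))   # violation of g_i * h_i <= t
            for gi, hi in zip(g, h)]
```

Driving t to zero recovers the original complementarity constraints; the scheme in the paper only needs this relaxation on the nearly degenerate components.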

Self-correcting geometry in model-based algorithms for derivative-free unconstrained optimization

Several efficient methods for derivative-free optimization (DFO) are based on the construction and maintenance of an interpolation model for the objective function. Most of these algorithms use special "geometry-improving" iterations, in which the geometry (poisedness) of the underlying interpolation set is improved at the cost of one or more function evaluations. We show that such … Read more

The Advanced Step NMPC Controller: Optimality, Stability and Robustness

Widespread application of dynamic optimization with fast optimization solvers leads to increased consideration of first-principles models for nonlinear model predictive control (NMPC). However, significant barriers to this optimization-based control strategy are feedback delays and consequent loss of performance and stability due to on-line computation. To overcome these barriers, recently proposed NMPC controllers based on nonlinear … Read more