Nonmonotone Filter Method for Nonlinear Optimization

We propose a new nonmonotone filter method to promote global and fast local convergence for sequential quadratic programming algorithms. Our method uses two filters: a global g-filter for global convergence, and a local nonmonotone l-filter that allows us to establish fast local convergence. We show how to switch between the two filters efficiently, and we …
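For readers unfamiliar with filter mechanisms, the Python sketch below shows a generic filter acceptance test on (constraint violation, objective) pairs with standard envelope margins. The function names and the margin parameters gamma_theta and gamma_f are illustrative; the paper's specific g-filter/l-filter switching logic is not reproduced here.

```python
# Generic filter acceptance test: a trial point is acceptable if no
# stored (violation, objective) pair dominates it, up to small margins.
# Margins and names are illustrative, not the paper's exact rules.

def acceptable(filter_entries, theta_trial, f_trial,
               gamma_theta=1e-5, gamma_f=1e-5):
    """True if (theta_trial, f_trial) is not dominated by any entry."""
    for theta, f in filter_entries:
        # An entry dominates the trial if the trial is (marginally)
        # worse in both constraint violation and objective value.
        if theta_trial >= (1 - gamma_theta) * theta and \
                f_trial >= f - gamma_f * theta:
            return False
    return True

def add_to_filter(filter_entries, theta, f):
    """Insert a new pair and discard entries it dominates."""
    filter_entries[:] = [(t, v) for (t, v) in filter_entries
                         if t < theta or v < f]
    filter_entries.append((theta, f))
```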

Stopping rules and backward error analysis for bound-constrained optimization

Termination criteria for the iterative solution of bound-constrained optimization problems are examined in the light of backward error analysis. It is shown that the problem of determining a suitable perturbation of the problem’s data corresponding to the definition of the backward error is analytically solvable under mild assumptions. Moreover, a link between existing termination criteria …
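As a concrete point of reference, a common termination test for the bound-constrained problem min f(x) subject to l <= x <= u measures the projected-gradient residual. The sketch below illustrates this standard criterion; it is not the backward-error-based rule derived in the paper.

```python
# Projected-gradient stopping rule for box constraints: the residual
# ||x - P_[l,u](x - grad)||_inf vanishes exactly at a KKT point.
import numpy as np

def projected_gradient_residual(x, grad, lower, upper):
    """Infinity norm of x minus its projected gradient step."""
    step = np.clip(x - grad, lower, upper)  # projection onto the box
    return np.max(np.abs(x - step))

def converged(x, grad, lower, upper, tol=1e-6):
    return projected_gradient_residual(x, grad, lower, upper) <= tol
```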

Algorithm 909: NOMAD: Nonlinear Optimization with the MADS algorithm

NOMAD is software that implements the MADS algorithm (Mesh Adaptive Direct Search) for black-box optimization under general nonlinear constraints. Black-box optimization concerns functions that are typically given as costly programs with no derivative information, and for which no function value is returned on a significant fraction of the attempted calls. NOMAD is designed for such problems and …
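To convey the flavour of mesh-based direct search, here is a bare-bones coordinate poll loop in Python. MADS itself uses richer (dense) polling directions, mesh updates, and constraint handling, and this is not NOMAD's API; it is only a sketch of the underlying idea.

```python
# Minimal derivative-free direct search: poll along coordinate
# directions, accept simple decrease, refine the mesh on failure.
import numpy as np

def poll_step(f, x, delta):
    """Evaluate f at x +/- delta * e_i; return an improved point or None."""
    fx = f(x)
    for i in range(len(x)):
        for sign in (+1.0, -1.0):
            trial = x.copy()
            trial[i] += sign * delta
            if f(trial) < fx:        # simple decrease accepted
                return trial
    return None                      # poll failed

def direct_search(f, x0, delta=1.0, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        better = poll_step(f, x, delta)
        if better is not None:
            x = better
        else:
            delta *= 0.5             # refine the mesh on a failed poll
            if delta < tol:
                break
    return x
```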

Real-Time Optimization as a Generalized Equation

We establish results for the problem of tracking a time-dependent manifold arising in online nonlinear programming by casting this as a generalized equation. We demonstrate that if points along a solution manifold are consistently strongly regular, it is possible to track the manifold approximately by solving a linear complementarity problem (LCP) at each time step. …
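For illustration, the sketch below solves an LCP of the form z >= 0, Mz + q >= 0, z^T (Mz + q) = 0 by projected Gauss-Seidel, assuming a positive diagonal (e.g. M positive definite). The abstract calls for an LCP solve per time step but does not prescribe this particular solver.

```python
# Projected Gauss-Seidel for LCP(M, q): iterate componentwise,
# clipping each coordinate at zero. Assumes M[i, i] > 0 for all i.
import numpy as np

def lcp_projected_gauss_seidel(M, q, z0=None, tol=1e-10, max_iter=10_000):
    n = len(q)
    z = np.zeros(n) if z0 is None else np.asarray(z0, dtype=float).copy()
    for _ in range(max_iter):
        z_old = z.copy()
        for i in range(n):
            # Residual of row i with the diagonal contribution removed.
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
        if np.max(np.abs(z - z_old)) < tol:
            break
    return z
```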

An adaptive cubic regularisation algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity

The adaptive cubic overestimation algorithm described in Cartis, Gould and Toint (2007) is adapted to the problem of minimizing a nonlinear, possibly nonconvex, smooth objective function over a convex domain. Convergence to first-order critical points is shown under standard assumptions, but without any Lipschitz continuity requirement on the objective’s Hessian. A worst-case complexity analysis in …
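Below is a schematic version of one adaptive cubic regularisation iteration, specialised to the unconstrained case for brevity (the paper treats minimisation over a convex set). The crude gradient-based inner solve and the parameter values are deliberate placeholders, not the paper's subproblem method.

```python
# One step of a simplified adaptive cubic regularisation scheme:
# minimise m(s) = f + g's + 0.5 s'Hs + (sigma/3)||s||^3 approximately,
# then adjust sigma by a standard ratio test.
import numpy as np

def arc_step(f, grad, hess, x, sigma, eta=0.1, gamma=2.0):
    g, H = grad(x), hess(x)
    s = np.zeros_like(x)
    for _ in range(50):  # crude inner solve of the cubic model
        model_grad = g + H @ s + sigma * np.linalg.norm(s) * s
        s -= 0.01 * model_grad
    model_decrease = -(g @ s + 0.5 * s @ H @ s
                       + sigma / 3.0 * np.linalg.norm(s) ** 3)
    rho = (f(x) - f(x + s)) / max(model_decrease, 1e-16)
    if rho >= eta:                    # successful step
        return x + s, sigma / gamma   # relax the regularisation
    return x, sigma * gamma           # unsuccessful: regularise more
```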

Nonlinear Stepsize Control, Trust Regions and Regularizations for Unconstrained Optimization

A general class of algorithms for unconstrained optimization is introduced, which subsumes the classical trust-region algorithm and two of its newer variants, as well as the cubic and quadratic regularization methods. A unified theory of global convergence to first-order critical points is then described for this class. An extension to projection-based trust-region algorithms for nonlinear …
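The unifying idea can be sketched as a single globalisation loop with pluggable subproblem and stepsize-control hooks; different choices of the hooks recover trust-region or regularisation variants. The skeleton below is our own paraphrase, not the paper's formal algorithm class.

```python
# Generic globalisation loop: model step, ratio test, stepsize update.
# 'solve_subproblem' and 'control_update' are placeholder hooks, e.g.
# a trust-region solve with radius update, or a cubic-regularised
# solve with a regularisation-weight update.
def generic_descent(f, model, solve_subproblem, control_update,
                    x, delta, eta=0.1, max_iter=100):
    for _ in range(max_iter):
        s = solve_subproblem(model, x, delta)       # TR or regularised step
        pred = model(x, 0 * s) - model(x, s)        # predicted decrease
        rho = (f(x) - f(x + s)) / max(pred, 1e-16)  # achieved vs predicted
        if rho >= eta:
            x = x + s                               # accept the step
        delta = control_update(delta, rho)          # expand or shrink
    return x
```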

SESOP-TN: Combining Sequential Subspace Optimization with Truncated Newton method

SESOP-TN is a method for very large scale unconstrained optimization of smooth functions. It combines ideas of Sequential Subspace Optimization (SESOP) [Narkiss-Zibulevsky-2005] with those of the Truncated Newton (TN) method. Replacing the TN line search with subspace optimization, we allow Conjugate Gradient (CG) iterations to remain matched across consecutive TN steps. This resolves the problem …
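For context on the TN component, the sketch below shows a standard truncated-Newton inner loop: conjugate gradients applied to the Newton system H d = -g, stopped early on negative curvature or a residual test. SESOP-TN's coupling of CG across outer iterations through subspace optimization is not reproduced here.

```python
# Truncated-Newton direction via CG on H d = -g, given a
# Hessian-vector product routine hess_vec (matrix-free).
import numpy as np

def truncated_newton_direction(hess_vec, g, max_cg=50, tol=0.5):
    d = np.zeros_like(g)
    r = -g.copy()                 # residual of H d = -g at d = 0
    p = r.copy()
    rs = r @ r
    for _ in range(max_cg):
        Hp = hess_vec(p)
        curv = p @ Hp
        if curv <= 0:             # negative curvature: truncate
            return d if d.any() else -g   # fall back to steepest descent
        alpha = rs / curv
        d += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(g):
            break                 # inexact (truncated) solve accepted
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d
```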

Nonlinear optimization for matroid intersection and extensions

We address optimization of nonlinear functions of the form $f(Wx)$, where $f:\mathbb{R}^d\rightarrow \mathbb{R}$ is a nonlinear function, $W$ is a $d\times n$ matrix, and feasible $x$ are in some large finite set $\mathcal{F}$ of integer points in $\mathbb{R}^n$. Generally, such problems are intractable, so we obtain positive algorithmic results by looking at broad natural classes …
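A toy instance may help fix the problem shape. Here $\mathcal{F}$ is the set of bases of a small uniform matroid, and $f$ and $W$ are made up for the example; brute-force enumeration stands in for the structured algorithms developed in the paper, which avoid enumerating $\mathcal{F}$.

```python
# Toy instance of max f(Wx) over a finite set F of integer points.
import itertools
import numpy as np

W = np.array([[1, 2, 0],
              [0, 1, 3]])              # d x n with d = 2, n = 3
f = lambda y: -np.sum((y - 2) ** 2)    # nonlinear objective on R^d

# F: indicator vectors of the bases of the uniform matroid U(2, 3),
# i.e. all 2-element subsets of a 3-element ground set.
feasible = [np.array(x) for x in itertools.product((0, 1), repeat=3)
            if sum(x) == 2]

best = max(feasible, key=lambda x: f(W @ x))
print(best, f(W @ best))
```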

Numerical Experience with a Recursive Trust-Region Method for Multilevel Nonlinear Optimization

We consider an implementation of the recursive multilevel trust-region algorithm proposed by Gratton, Mouffe, Toint, and Weber (2008) for bound-constrained nonlinear problems, and provide numerical experience on multilevel test problems. A suitable choice of the algorithm’s parameters is identified on these problems, yielding a satisfactory compromise between reliability and efficiency. The resulting default algorithm is then …

ORBIT: Optimization by Radial Basis Function Interpolation in Trust-Regions

We present a new derivative-free algorithm, ORBIT, for unconstrained local optimization of computationally expensive functions. A trust-region framework using interpolating Radial Basis Function (RBF) models is employed. The RBF models considered often allow ORBIT to interpolate nonlinear functions using fewer function evaluations than the polynomial models used by existing techniques. Approximation guarantees are obtained by …
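The sketch below fits an interpolation model of the kind ORBIT builds within its trust region: a cubic RBF plus a linear polynomial tail, assuming the sample points are poised for linear interpolation (so the system is nonsingular). The paper's point management and safeguards are omitted.

```python
# Fit a cubic RBF interpolant with a linear tail:
#   m(x) = sum_j lam_j ||x - x_j||^3 + c_0 + c_1:^T x,
# solving the standard augmented interpolation system.
import numpy as np

def fit_rbf(points, values):
    """points: (m, n) array of samples; values: (m,). Returns m(x)."""
    m, n = points.shape
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    Phi = dists ** 3                          # cubic kernel matrix
    P = np.hstack([np.ones((m, 1)), points])  # linear tail basis
    A = np.block([[Phi, P],
                  [P.T, np.zeros((n + 1, n + 1))]])
    rhs = np.concatenate([values, np.zeros(n + 1)])
    coef = np.linalg.solve(A, rhs)            # requires poised points
    lam, c = coef[:m], coef[m:]

    def model(x):
        r = np.linalg.norm(points - x, axis=1)
        return lam @ r ** 3 + c[0] + c[1:] @ x
    return model
```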