On solving trust-region and other regularised subproblems in optimization

The solution of trust-region and regularisation subproblems which arise in unconstrained optimization is considered. Building on the pioneering work of Gay, Moré and Sorensen, methods which obtain the solution of a sequence of parametrized linear systems by factorization are used. Enhancements using high-order polynomial approximation and inverse iteration ensure that the resulting method is both …
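
The Gay/Moré-Sorensen approach these methods build on applies Newton's method to the secular equation phi(lam) = 1/||p(lam)|| - 1/Delta, refactorizing B + lam*I at each trial multiplier. A minimal numpy/scipy sketch of that basic iteration follows; the function name is ours, and the hard case and the careful safeguards the paper refines are deliberately omitted.

```python
import numpy as np
from scipy.linalg import solve_triangular

def more_sorensen(B, g, delta, tol=1e-8, max_iter=50):
    """Basic More-Sorensen iteration (sketch): find lam >= 0 with
    (B + lam*I) p = -g and ||p|| <= delta, or ||p|| = delta at the
    boundary.  Hard case and full safeguarding are omitted."""
    n = g.size
    lam, p = 0.0, np.zeros(n)
    for _ in range(max_iter):
        try:
            L = np.linalg.cholesky(B + lam * np.eye(n))   # B + lam*I = L L^T
        except np.linalg.LinAlgError:
            lam = 2.0 * lam + 1.0      # crude push until B + lam*I is PD
            continue
        p = solve_triangular(L.T, solve_triangular(L, -g, lower=True))
        pnorm = np.linalg.norm(p)
        if (lam == 0.0 and pnorm <= delta) or abs(pnorm - delta) <= tol * delta:
            return p, lam              # interior or boundary solution
        # Newton step on the secular equation phi(lam) = 1/||p|| - 1/delta,
        # using the Cholesky factor to evaluate phi'(lam) cheaply.
        q = solve_triangular(L, p, lower=True)
        lam = max(lam + (pnorm / np.linalg.norm(q)) ** 2
                  * (pnorm - delta) / delta, 0.0)
    return p, lam
```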

An Interior-Point Algorithm for Large-Scale Nonlinear Optimization with Inexact Step Computations

We present a line-search algorithm for large-scale continuous optimization. The algorithm is matrix-free in that it does not require the factorization of derivative matrices. Instead, it uses iterative linear system solvers. Inexact step computations are supported in order to save computational expense during each iteration. The algorithm is an interior-point approach derived from an inexact …
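
To illustrate the matrix-free, inexact flavour of such a step computation, here is a hedged sketch: the symmetric Newton (KKT) system is wrapped in a LinearOperator so that only Hessian- and Jacobian-vector products are needed, and inexactness is induced simply by capping the Krylov iteration count. All names and the interface are illustrative; the paper's interior-point method adds barrier terms and much subtler termination tests.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

def inexact_newton_step(hess_vec, jac, grad, c, max_inner=20):
    """One matrix-free, inexact Newton step (sketch): hess_vec(v) returns
    the Hessian of the Lagrangian times v, jac is the m x n constraint
    Jacobian, grad and c are the current gradient and constraint values."""
    n, m = grad.size, c.size

    def kkt_matvec(v):                 # symmetric KKT operator [[H, J^T], [J, 0]]
        x, y = v[:n], v[n:]
        return np.concatenate([hess_vec(x) + jac.T @ y, jac @ x])

    K = LinearOperator((n + m, n + m), matvec=kkt_matvec, dtype=float)
    rhs = -np.concatenate([grad, c])
    # Inexactness: cap the MINRES iteration count rather than solving
    # the system to full accuracy.
    sol, info = minres(K, rhs, maxiter=max_inner)
    return sol[:n], sol[n:]            # primal step, multiplier step
```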

ORBIT: Optimization by Radial Basis Function Interpolation in Trust-Regions

We present a new derivative-free algorithm, ORBIT, for unconstrained local optimization of computationally expensive functions. A trust-region framework using interpolating Radial Basis Function (RBF) models is employed. The RBF models considered often allow ORBIT to interpolate nonlinear functions using fewer function evaluations than the polynomial models used by existing techniques. Approximation guarantees are obtained by …
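
As a rough illustration of the kind of surrogate ORBIT builds, the sketch below fits an RBF interpolant with a linear polynomial tail; a cubic kernel is chosen here for concreteness, though ORBIT supports several. It assumes the sample set is poised, i.e. the bordered system is nonsingular.

```python
import numpy as np

def fit_cubic_rbf(X, f):
    """Fit an RBF interpolant with a linear tail (sketch).
    X is k x n (sample points), f has length k.  Assumes distinct,
    poised points so the bordered linear system is nonsingular."""
    k, n = X.shape
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    Phi = r ** 3                               # cubic kernel phi(r) = r^3
    P = np.hstack([np.ones((k, 1)), X])        # linear polynomial tail
    A = np.block([[Phi, P], [P.T, np.zeros((n + 1, n + 1))]])
    sol = np.linalg.solve(A, np.concatenate([f, np.zeros(n + 1)]))
    lam, c = sol[:k], sol[k:]

    def model(x):                              # interpolant m(x)
        phi = np.linalg.norm(X - x, axis=1) ** 3
        return phi @ lam + c[0] + x @ c[1:]
    return model
```

A trust-region step would then approximately minimize this model over the current ball around the best point.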

A Matrix-free Algorithm for Equality Constrained Optimization Problems with Rank-deficient Jacobians

We present a line search algorithm for large-scale constrained optimization that is robust and efficient even for problems with (nearly) rank-deficient Jacobian matrices. The method is matrix-free (i.e., it does not require explicit storage or factorizations of derivative matrices), allows for inexact step computations, and is applicable for nonconvex problems. The main components of the …
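
One common building block of methods in this class, sketched below under our own naming, is a matrix-free normal step that inexactly reduces the linearized constraint violation with a Krylov least-squares solver; LSQR needs only products with J and J^T and stays well defined even when the Jacobian is (nearly) rank deficient. This conveys only the flavour of the approach, not the paper's algorithm.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def normal_step(jac, c, radius, max_inner=20):
    """Inexactly reduce the linearized infeasibility ||c + J v|| with a
    few LSQR iterations, then pull the step back into a trust region.
    Illustrative only."""
    v = lsqr(jac, -c, iter_lim=max_inner)[0]   # inexact least-squares step
    nv = np.linalg.norm(v)
    return v if nv <= radius else (radius / nv) * v
```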

Accelerated line-search and trust-region methods

In numerical optimization, line-search and trust-region methods are two important classes of descent schemes, with well-understood global convergence properties. Here we consider “accelerated” versions of these methods, where the conventional iterate is allowed to be replaced by any point that produces at least as much decrease in the cost function as a fixed fraction of …
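
The acceptance rule just described is simple to state in code; the sketch below (names ours) compares the decrease produced by an arbitrary accelerated candidate with that of the conventional line-search or trust-region iterate.

```python
def accelerated_iterate(f, x, x_conv, x_acc, c=0.5):
    """Acceptance rule sketched from the description above: keep the
    accelerated candidate x_acc whenever it achieves at least a fraction
    c of the decrease produced by the conventional iterate x_conv."""
    fx = f(x)
    return x_acc if fx - f(x_acc) >= c * (fx - f(x_conv)) else x_conv
```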

A subspace minimization method for the trust-region step

We consider methods for large-scale unconstrained minimization based on finding an approximate minimizer of a quadratic function subject to a two-norm trust-region constraint. The Steihaug-Toint method uses the conjugate-gradient (CG) algorithm to minimize the quadratic over a sequence of expanding subspaces until the iterates either converge to an interior point or cross the constraint boundary. …
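
For concreteness, here is a standard sketch of the Steihaug-Toint truncated CG iteration for min g^T s + 0.5 s^T B s subject to ||s|| <= delta; it is matrix-free, stopping either at an approximate interior minimizer or at the boundary point reached when negative curvature is detected or the step leaves the region.

```python
import numpy as np

def steihaug_cg(hess_vec, g, delta, tol=1e-6, max_iter=200):
    """Steihaug-Toint truncated CG (sketch), using only Hessian-vector
    products hess_vec(v) = B v."""
    s = np.zeros_like(g)
    r = -np.asarray(g, dtype=float)
    d = r.copy()
    for _ in range(max_iter):
        Bd = hess_vec(d)
        dBd = d @ Bd
        if dBd <= 0.0:                     # negative curvature: run to boundary
            return s + _boundary_tau(s, d, delta) * d
        alpha = (r @ r) / dBd
        if np.linalg.norm(s + alpha * d) >= delta:
            return s + _boundary_tau(s, d, delta) * d   # step crosses boundary
        s = s + alpha * d
        r_new = r - alpha * Bd
        if np.linalg.norm(r_new) <= tol * np.linalg.norm(g):
            return s                       # converged to an interior point
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return s

def _boundary_tau(s, d, delta):
    """Positive root tau of ||s + tau*d||^2 = delta^2."""
    sd, dd, ss = s @ d, d @ d, s @ s
    return (-sd + np.sqrt(sd * sd + dd * (delta * delta - ss))) / dd
```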

Formulation and solution strategies for nonparametric nonlinear stochastic programs, with an application in finance

We consider a class of stochastic programming models where the uncertainty is classically represented using parametric distribution families. The parameters are then usually estimated together with the optimal value of the problem. However, misspecification of the underlying random variables often leads to unrealistic results when little is known about their true distributions. We propose to …

Iterative methods for finding a trust-region step

We consider the problem of finding an approximate minimizer of a general quadratic function subject to a two-norm constraint. The Steihaug-Toint method minimizes the quadratic over a sequence of expanding subspaces until the iterates either converge to an interior point or cross the constraint boundary. The benefit of this approach is that an approximate solution …
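
For reference, the subproblem in question is

```latex
\min_{s \in \mathbb{R}^n} \; q(s) = g^{T} s + \tfrac{1}{2}\, s^{T} B s
\quad \text{subject to} \quad \|s\|_{2} \le \Delta ,
```

and the classical characterization (due to Gay, and to Moré and Sorensen) states that s* is a global solution if and only if there is a multiplier sigma* >= 0 with

```latex
(B + \sigma^{*} I)\, s^{*} = -g, \qquad
\sigma^{*} \bigl( \Delta - \| s^{*} \|_{2} \bigr) = 0, \qquad
B + \sigma^{*} I \succeq 0 .
```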

A Retrospective Trust-Region Method for Unconstrained Optimization

We introduce a new trust-region method for unconstrained optimization where the radius update is computed using the model information at the current iterate rather than at the preceding one. The update is then performed according to how well the current model retrospectively predicts the value of the objective function at the last iterate. Global convergence to …
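
The retrospective idea can be made concrete with a short sketch: the ratio compares the achieved reduction with what the current model m_{k+1} retrospectively predicts for the step just taken, and the radius is then updated from that ratio. All constants and names below are placeholders, not the paper's.

```python
def retrospective_ratio(f_prev, f_curr, model_curr, x_prev, x_curr):
    """rho_{k+1} = (f(x_k) - f(x_{k+1})) / (m_{k+1}(x_k) - m_{k+1}(x_{k+1})):
    how well the current model retrospectively explains the achieved
    decrease over the step just taken."""
    return (f_prev - f_curr) / (model_curr(x_prev) - model_curr(x_curr))

def retrospective_radius(rho, radius, step_norm,
                         eta1=0.05, eta2=0.9, gamma1=0.5, gamma2=2.0):
    """Illustrative radius rule driven by the retrospective ratio;
    the threshold constants here are placeholders."""
    if rho >= eta2:
        return gamma2 * radius      # model explains the step well: expand
    if rho >= eta1:
        return radius               # acceptable fit: keep the radius
    return gamma1 * step_norm       # poor retrospective fit: shrink
```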