On fast trust region methods for quadratic models with linear constraints

Quadratic models Q_k(.) of the objective function F(.) are used by many successful iterative algorithms for minimization, where k is the iteration number. Given the vector of variables x_k, a new vector x_{k+1} may be calculated that satisfies Q_k(x_{k+1}) < Q_k(x_k), in the hope that it provides the reduction F(x_{k+1}) < F(x_k). Trust region methods …
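
The abstract breaks off at the trust region idea, so here is a minimal sketch of one generic trust region iteration: a step that reduces the model Q_k within a radius delta, followed by the usual ratio test. The Cauchy point subproblem solver and the constants 0.25, 0.75 are illustrative textbook choices, not the method of the paper.

```python
import numpy as np

def cauchy_step(g, B, delta):
    """Minimize the quadratic model along -g, subject to the radius delta.

    The Cauchy point is a standard textbook subproblem solver, used here
    only to make the sketch runnable; it is not the method of the paper.
    """
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return np.zeros_like(g)            # already first order stationary
    gBg = g @ B @ g
    tau = 1.0 if gBg <= 0.0 else min(1.0, gnorm**3 / (delta * gBg))
    return -tau * (delta / gnorm) * g

def trust_region_iteration(F, x, g, B, delta):
    """One generic iteration: model step, ratio test, radius update."""
    s = cauchy_step(g, B, delta)
    predicted = -(g @ s + 0.5 * s @ B @ s)     # Q_k(x_k) - Q_k(x_k + s)
    if predicted <= 0.0:
        return x, delta                        # no model reduction available
    rho = (F(x) - F(x + s)) / predicted        # agreement between F and Q_k
    if rho < 0.25:
        delta *= 0.5                           # poor agreement: shrink radius
    elif rho > 0.75 and np.isclose(np.linalg.norm(s), delta):
        delta *= 2.0                           # good boundary step: expand
    return (x + s if rho > 0.0 else x), delta
```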

The Lagrange method and SAO with bounds on the dual variables

We consider the general nonlinear programming problem with equality and inequality constraints when the variables x are confined to a compact set. We regard the Lagrange multipliers as dual variables lambda, those of the inequalities being nonnegative. For each lambda, we let phi(lambda) be the least value of the Lagrange function, which occurs at x=x(lambda), …
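
As a concrete reading of the definition, the sketch below evaluates phi(lambda) for a given multiplier vector by minimizing the Lagrange function over a box that plays the role of the compact set. The names F, c, bounds, and x0 are hypothetical, and a local solver handles the inner minimization, so the value is only a local stand-in for the true least value.

```python
import numpy as np
from scipy.optimize import minimize

def phi(lam, F, c, bounds, x0):
    """Evaluate phi(lambda) = min_x L(x, lambda) over a box.

    Hypothetical setup: F is the objective, c(x) returns the vector of
    constraint functions, and the compact set is the box `bounds`; the
    components of `lam` belonging to inequality constraints should be
    nonnegative. A local solver does the inner minimization, so the
    result is a local stand-in for the true least value.
    """
    L = lambda x: F(x) + lam @ c(x)            # the Lagrange function
    res = minimize(L, x0, bounds=bounds)
    return res.fun, res.x                      # phi(lambda) and x(lambda)
```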

On the convergence of trust region algorithms for unconstrained minimization without derivatives

We consider iterative trust region algorithms for the unconstrained minimization of an objective function F(x) of n variables, when F is differentiable but no derivatives are available, and when each model of F is a linear or quadratic polynomial. The models interpolate F at n+1 points, which defines them uniquely when they are linear polynomials. …
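
To see why n+1 interpolation points determine a linear model uniquely, note that a linear polynomial c + g^T x has n+1 coefficients, so the interpolation conditions form a square linear system. A small illustrative construction, assuming the points are affinely independent so that the matrix is nonsingular:

```python
import numpy as np

def linear_model(points, values):
    """Fit the unique linear polynomial m(x) = c + g @ x with m(y_j) = F(y_j).

    `points` holds n+1 vectors y_j in R^n (rows) and `values` the numbers
    F(y_j); the points must be affinely independent, otherwise the
    (n+1) x (n+1) system below is singular.
    """
    Y = np.asarray(points, dtype=float)              # shape (n+1, n)
    f = np.asarray(values, dtype=float)              # shape (n+1,)
    A = np.hstack([np.ones((Y.shape[0], 1)), Y])     # row j is [1, y_j]
    coef = np.linalg.solve(A, f)                     # [c, g_1, ..., g_n]
    return coef[0], coef[1:]                         # constant term and gradient
```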

Beyond symmetric Broyden for updating quadratic models in minimization without derivatives

Some highly successful algorithms for unconstrained minimization without derivatives construct changes to the variables by applying trust region methods to quadratic approximations to the objective function F(x), x in R^n. A quadratic model has (n+1)(n+2)/2 independent parameters, but each new model may interpolate only 2n+1 values of F, for instance. The symmetric Broyden method takes …
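
The abstract is cut off mid-sentence; for orientation, the least-change principle usually associated with symmetric Broyden updating in this derivative-free setting (a paraphrase, not the paper's own text) chooses the new model Q to solve

```latex
\min_{Q}\; \bigl\| \nabla^{2} Q - \nabla^{2} Q_{\mathrm{old}} \bigr\|_{F}
\quad \text{subject to} \quad
Q(y_j) = F(y_j), \qquad j = 1, 2, \ldots, m .
```

With m = 2n+1 interpolation conditions, the remaining (n+1)(n+2)/2 - (2n+1) degrees of freedom in the quadratic are taken up by the Frobenius norm criterion.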

The BOBYQA algorithm for bound constrained optimization without derivatives

BOBYQA is an iterative algorithm for finding the minimum of a function F(x) subject to lower and upper bounds on the variables, F(x) being specified by a “black box” that returns the value F(x) for any feasible x. Each iteration employs a quadratic approximation Q to F that satisfies Q(y_j) = F(y_j), j=1,2,…,m, the interpolation …
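
BOBYQA is distributed in several libraries; the sketch below calls the implementation wrapped by NLopt, assuming the `nlopt` Python bindings are installed, with the Rosenbrock function standing in for the black box F.

```python
import numpy as np
import nlopt

def rosenbrock(x, grad):
    # BOBYQA is derivative-free; NLopt still passes `grad`, which stays unused.
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

opt = nlopt.opt(nlopt.LN_BOBYQA, 2)       # Powell's BOBYQA, 2 variables
opt.set_lower_bounds([-2.0, -2.0])        # the bound constraints on x
opt.set_upper_bounds([2.0, 2.0])
opt.set_min_objective(rosenbrock)
opt.set_xtol_rel(1e-8)
x = opt.optimize(np.array([-1.2, 1.0]))   # starting point
print(x, opt.last_optimum_value())
```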

On the convergence of a wide range of trust region methods for unconstrained optimization

We consider trust region methods for seeking the unconstrained minimum of an objective function F(x), x being the vector of variables, when the gradient grad F is available. The methods are iterative with a starting point x_1 being given. The new vector of variables x_{k+1} is derived from a quadratic approximation to F that interpolates …
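
The abstract stops before stating the approximation; in the usual notation, and as an assumption about what follows that is consistent with the models above, the quadratic approximation and the trust region subproblem take the form

```latex
Q_k(x) \;=\; F(x_k) \;+\; \nabla F(x_k)^{\top} (x - x_k)
       \;+\; \tfrac{1}{2}\, (x - x_k)^{\top} B_k\, (x - x_k),
\qquad
x_{k+1} \;\approx\; \operatorname*{arg\,min}_{\|x - x_k\| \le \Delta_k} Q_k(x),
```

where B_k is a symmetric matrix and Delta_k the trust region radius; presumably the wide range of methods in the title differ in how B_k and Delta_k are chosen.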

On nonlinear optimization since 1959

This view of the development of algorithms for nonlinear optimization is based on the research that has been of particular interest to the author since 1959, including several of his own contributions. After a brief survey of classical methods, which may require good starting points in order to converge successfully, the huge impact of variable …

Developments of NEWUOA for unconstrained minimization without derivatives

The NEWUOA software is described briefly, with some numerical results that show good efficiency and accuracy in the unconstrained minimization without derivatives of functions of up to 320 variables. Some preliminary work on an extension of NEWUOA that allows simple bounds on the variables is also described. It suggests a variation of a technique in …

A view of algorithms for optimization without derivatives

Let the least value of a function of many variables be required. If its gradient is available, then one can tell whether search directions are downhill, and first order conditions help to identify the solution. It seems in practice, however, that the vast majority of unconstrained calculations do not employ any derivatives. A view of …
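
The remark about downhill directions refers to the elementary first order test: a search direction d is downhill at x exactly when

```latex
\nabla F(x)^{\top} d \;<\; 0 ,
```

and the first order condition grad F(x) = 0 is what helps to identify the solution when the gradient is available.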