We consider trust region methods for seeking the unconstrained minimum of an objective function F(x), x being the vector of variables, when the gradient grad F is available. The methods are iterative, with a starting point x_1 being given. The new vector of variables x_(k+1) is derived from a quadratic approximation to F that interpolates F and its gradient at x_k, where k is the iteration number. The second derivative matrix of the quadratic approximation, B_k say, can be indefinite, because the approximation is employed only if the vector of variables x is within distance Delta_k of x_k, where Delta_k is a trust region radius that is adjusted automatically. Thus the approximation is useful if the norm of grad F at x_k is sufficiently large and if the norm of B_k and Delta_k are sufficiently small. It is proved under mild assumptions that the norm of the gradient of F at x_(k+1) becomes less than any given positive constant after a finite number of iterations, and then it is usual to end the calculation. The assumptions include a Lipschitz condition on grad F, and also that F is bounded below. The termination property is established in a single theorem that applies to a wide range of trust region methods that force the sequence F(x_k), k=1,2,3,..., to decrease monotonically. Any choice of each symmetric matrix B_k is allowed, provided that the norm of this matrix is bounded above by a constant multiple of k.
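For concreteness, the following Python sketch illustrates the general structure of a monotone trust region iteration of the kind the abstract describes. It is a minimal illustration under our own assumptions, not the paper's algorithm: the function and parameter names (trust_region_minimize, eta, the doubling/halving radius rule) are our choices, and the trial step is the simple Cauchy point along -grad F, whereas the theorem covers a much wider class of steps and radius updates.

    # A minimal sketch of a monotone trust region method (illustrative only;
    # names, step rule, and radius update are our assumptions, not the paper's).
    import numpy as np

    def trust_region_minimize(F, grad, B_of_k, x1, delta1=1.0, eps=1e-6,
                              eta=0.1, max_iter=1000):
        x, delta = np.asarray(x1, dtype=float), delta1
        for k in range(1, max_iter + 1):
            g = grad(x)
            gnorm = np.linalg.norm(g)
            if gnorm < eps:                    # stop once ||grad F(x_k)|| is small
                return x
            B = B_of_k(x, k)                   # symmetric, possibly indefinite;
                                               # only ||B_k|| <= const * k is assumed
            # Cauchy step: minimize the quadratic model along -g inside the region.
            gBg = g @ B @ g
            t_max = delta / gnorm              # step length that reaches the boundary
            t = min(gnorm**2 / gBg, t_max) if gBg > 0 else t_max
            s = -t * g
            pred = -(g @ s + 0.5 * s @ B @ s)  # model reduction (positive for this step)
            ared = F(x) - F(x + s)             # actual reduction in F
            rho = ared / pred
            if rho >= eta:
                x = x + s                      # accept: F(x_k) decreases monotonically
            # one simple automatic adjustment of the trust region radius
            delta = 2.0 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)
        return x

    # Example use on F(x) = x_1^2 + 10 x_2^2, taking B_k to be the exact Hessian.
    F = lambda x: x[0]**2 + 10.0 * x[1]**2
    grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
    B_of_k = lambda x, k: np.diag([2.0, 20.0])
    x_star = trust_region_minimize(F, grad, B_of_k, x1=[3.0, -2.0])

The ratio rho compares the reduction in F with the reduction predicted by the quadratic model; a step is accepted only when rho is large enough, which is what forces F(x_k) to decrease monotonically, and the radius Delta_k grows or shrinks according to how well the model predicted the actual behaviour of F.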
Citation
Report DAMTP 2008/NA06, University of Cambridge, UK, May 2008 (to be published in the IMA Journal of Numerical Analysis).
Article
On the convergence of a wide range of trust region methods for unconstrained optimization