A shifted Steihaug-Toint method for computing a trust-region step.

Trust-region methods combine well with the Newton method for unconstrained optimization. The Moré-Sorensen direct method and the Steihaug-Toint iterative method are the most commonly used for solving trust-region subproblems. We propose a method which combines both of these approaches. Using the small-size Lanczos matrix, we apply the Moré-Sorensen method to a small-size trust-region …
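For context, the standard (unshifted) Steihaug-Toint iteration is a truncated conjugate-gradient method that stops at the trust-region boundary or on negative curvature. A minimal sketch, assuming a dense symmetric `H` (function names and tolerances are illustrative, not the paper's shifted variant):

```python
import numpy as np

def steihaug_toint(H, g, delta, tol=1e-8, max_iter=None):
    """Approximately minimize g^T p + 0.5 p^T H p subject to ||p|| <= delta
    by truncated conjugate gradients (standard Steihaug-Toint)."""
    n = g.size
    if max_iter is None:
        max_iter = 2 * n
    p = np.zeros(n)
    r = g.copy()          # gradient of the quadratic model at p = 0
    d = -r                # first search direction: steepest descent
    if np.linalg.norm(r) < tol:
        return p
    for _ in range(max_iter):
        Hd = H @ d
        dHd = d @ Hd
        if dHd <= 0:
            # negative curvature: follow d to the trust-region boundary
            return p + _to_boundary(p, d, delta)
        alpha = (r @ r) / dHd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:
            # CG step leaves the region: truncate on the boundary
            return p + _to_boundary(p, d, delta)
        r_next = r + alpha * Hd
        if np.linalg.norm(r_next) < tol:
            return p_next
        beta = (r_next @ r_next) / (r @ r)
        d = -r_next + beta * d
        p, r = p_next, r_next
    return p

def _to_boundary(p, d, delta):
    """Positive tau with ||p + tau * d|| = delta (quadratic formula)."""
    a = d @ d
    b = 2 * (p @ d)
    c = p @ p - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return tau * d
```

For well-conditioned interior solutions this reduces to plain CG on `H p = -g`; the shifted variant described above instead extracts a shift from the Lanczos tridiagonal matrix before the CG solve.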

Additional properties of shifted variable metric methods.

Some supplements to shifted variable metric or quasi-Newton methods for unconstrained minimization are given, including new limited-memory methods. Global convergence of these methods can be established for convex, sufficiently smooth functions. Some encouraging numerical experience is reported.

Citation: Report No. V899-03, Institute of Computer Science, Czech Academy of Sciences, Prague, December 2003 (revised May 2004).

A Wide Interval for Efficient Self-Scaling Quasi-Newton Algorithms

This paper uses certain conditions for the global and superlinear convergence of the two-parameter self-scaling Broyden family of quasi-Newton algorithms for unconstrained optimization to derive a wide interval for self-scaling updates. Numerical testing shows that such algorithms not only accelerate the convergence of the (unscaled) methods from the so-called convex class, but increase their chances …
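To illustrate what a self-scaling update looks like, here is a minimal sketch of one self-scaled BFGS step using the classical Oren-Luenberger factor tau = (y^T s)/(s^T B s) as a representative point from a scaling interval. This is a generic illustration, not the paper's two-parameter Broyden family:

```python
import numpy as np

def self_scaling_bfgs_update(B, s, y):
    """One self-scaled BFGS update of a Hessian approximation B.

    B is scaled by tau = (y^T s)/(s^T B s) (Oren-Luenberger choice)
    before the standard BFGS formula is applied. The update is skipped
    when the curvature condition y^T s > 0 fails, so B stays positive
    definite."""
    Bs = B @ s
    sBs = s @ Bs
    ys = y @ s
    if ys <= 0 or sBs <= 0:
        return B  # skip: curvature condition violated
    tau = ys / sBs
    B = tau * B
    Bs = tau * Bs
    sBs = tau * sBs  # after scaling, s^T B s equals y^T s
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / ys
```

Regardless of the scaling factor, the updated matrix satisfies the secant condition B_new s = y, which is a convenient sanity check.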

New Variable Metric Methods for Unconstrained Minimization Covering the Large-Scale Case

A new family of numerically efficient variable metric or quasi-Newton methods for unconstrained minimization is given, which can be easily adapted to large-scale optimization. Global convergence of the methods can be established for convex, sufficiently smooth functions. Some encouraging numerical experience is reported.

Citation: Report V876, Institute of Computer Science, AV CR, Pod Vodarenskou Vezi …

On the Convergence of Newton Iterations to Non-Stationary Points

We study conditions under which line search Newton methods for nonlinear systems of equations and optimization fail due to the presence of singular non-stationary points. These points are not solutions of the problem and are characterized by the fact that the Jacobian or Hessian matrices are singular. It is shown that, for systems of nonlinear equations, …

Convergence Results for Pattern Search Algorithms are Tight

Recently, general definitions of pattern search methods for both unconstrained and linearly constrained optimization were presented. It was shown, under mild conditions, that there exists a subsequence of iterates converging to a stationary point. In the unconstrained case, stronger results are derived under additional assumptions. In this paper, we present three small-dimensional examples showing …
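For readers unfamiliar with the class of methods being analyzed, here is a minimal instance of a pattern search method: coordinate search that polls the positive and negative coordinate directions and halves the step when no poll point improves. This is an illustrative sketch only, not one of the paper's counterexamples:

```python
import numpy as np

def coordinate_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Basic pattern (coordinate) search for minimizing f.

    Polls x +/- step * e_i for each coordinate direction e_i; on an
    improving poll point, moves there; otherwise halves the step.
    Stops when the step falls below tol."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(n):
            for sign in (1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break  # re-poll from the new iterate
        if not improved:
            step *= 0.5  # pattern refinement
    return x, fx
```

Convergence theory for this class guarantees only that a subsequence of iterates approaches stationarity; the examples announced above show that, in general, this guarantee cannot be strengthened.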