Transformations enabling to construct limited-memory Broyden class methods

The Broyden class of quasi-Newton updates for inverse Hessian approximation is transformed into a formal BFGS update, which makes it possible to generalize the well-known Nocedal method based on the Strang recurrences to the scaled limited-memory Broyden family, using the same number of stored vectors as the limited-memory BFGS method. Two variants are given, the …
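
For reference, the Strang two-loop recursion underlying the Nocedal L-BFGS method can be sketched as follows; this is the standard BFGS form, not the Broyden-class generalization developed in the paper, and the function name and arguments are illustrative only.

```python
import numpy as np

def lbfgs_two_loop(grad, s_list, y_list, gamma):
    """Standard Nocedal/Strang two-loop recursion: returns H_k @ grad, where H_k
    is the L-BFGS inverse Hessian approximation built from the stored pairs
    (s_i, y_i), ordered oldest to newest, with initial scaling H_0 = gamma * I."""
    q = grad.copy()
    rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: newest pair to oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * s.dot(q)
        alphas.append(alpha)
        q -= alpha * y
    r = gamma * q  # apply the (scaled) initial inverse Hessian approximation
    # Second loop: oldest pair to newest.
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * y.dot(r)
        r += (alpha - beta) * s
    return r  # the quasi-Newton search direction is -r
```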

Limited-memory projective variable metric methods for unconstrained minimization

A new family of limited-memory variable metric (quasi-Newton) methods for unconstrained minimization is given. The methods are based on a positive definite inverse Hessian approximation in the form of the sum of the identity matrix and two low-rank matrices, obtained by the standard scaled Broyden class update. To reduce the rank of the matrices, various …
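
The identity-plus-low-rank form is what keeps such methods limited-memory: the approximation is never stored as a dense n-by-n matrix. Below is a minimal sketch of a matrix-vector product with a hypothetical approximation H = ζI + UUᵀ − VVᵀ; the paper's actual parametrization and update formulas are not reproduced here.

```python
import numpy as np

def apply_low_rank_inverse_hessian(v, zeta, U, V):
    """Apply H = zeta*I + U @ U.T - V @ V.T to a vector v without forming H.
    U and V are n-by-m with m << n, so the cost is O(n*m) per product instead
    of O(n^2).  This generic form only illustrates the storage scheme."""
    return zeta * v + U @ (U.T @ v) - V @ (V.T @ v)
```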

Computational experience with modified conjugate gradient methods for unconstrained optimization

In this report, several modifications of the nonlinear conjugate gradient method are described and investigated. Theoretical properties of these modifications are proved and their practical performance is demonstrated using extensive numerical experiments. Citation: Technical report No. 1038, Institute of Computer Science, Pod Vodarenskou Vezi 2, 18207 Praha 8, December 2008.
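
As background for the modifications studied in the report, here is a minimal sketch of one standard nonlinear CG variant (Polak–Ribière+ with a backtracking Armijo line search); this baseline is assumed for illustration and is not one of the report's modifications.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, max_iter=200, tol=1e-6):
    """Polak-Ribiere+ nonlinear conjugate gradients with a backtracking Armijo
    line search; a baseline sketch, not one of the report's modifications."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:        # safeguard: restart if d is not a descent direction
            d = -g
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * g.dot(d) and t > 1e-12:
            t *= 0.5             # backtracking step halving
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(g_new.dot(g_new - g) / g.dot(g), 0.0)  # PR+ formula
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```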

On solving trust-region and other regularised subproblems in optimization

The solution of trust-region and regularisation subproblems which arise in unconstrained optimization is considered. Building on the pioneering work of Gay, Moré and Sorensen, methods which obtain the solution of a sequence of parametrized linear systems by factorization are used. Enhancements using high-order polynomial approximation and inverse iteration ensure that the resulting method is both …
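
The factorization-based approach of Gay, Moré and Sorensen solves a sequence of shifted linear systems (B + λI)p = −g while adjusting λ through the secular equation; the sketch below covers only the easy case, with none of the hard-case handling or high-order acceleration described in the paper.

```python
import numpy as np

def trust_region_subproblem(B, g, delta, max_iter=50, tol=1e-8):
    """Approximately solve min g.T p + 0.5 p.T B p s.t. ||p|| <= delta by Newton's
    method on the secular equation 1/||p(lam)|| = 1/delta, factorizing B + lam*I
    at each step (More-Sorensen style; easy case only, dense matrices)."""
    n = len(g)
    lam, p = 0.0, np.zeros(n)
    for _ in range(max_iter):
        try:
            L = np.linalg.cholesky(B + lam * np.eye(n))
        except np.linalg.LinAlgError:
            lam = 2.0 * lam + 1e-4     # shift further until positive definite
            continue
        p = -np.linalg.solve(L.T, np.linalg.solve(L, g))
        norm_p = np.linalg.norm(p)
        if (lam == 0.0 and norm_p <= delta) or abs(norm_p - delta) <= tol * delta:
            return p                   # interior solution, or boundary solution found
        q = np.linalg.solve(L, p)      # appears in the secular-equation derivative
        # Newton step on phi(lam) = 1/delta - 1/||p(lam)||.
        lam = max(lam + (norm_p / np.linalg.norm(q)) ** 2 * (norm_p - delta) / delta, 0.0)
    return p
```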

Self-correcting geometry in model-based algorithms for derivative-free unconstrained optimization

Several efficient methods for derivative-free optimization (DFO) are based on the construction and maintenance of an interpolation model for the objective function. Most of these algorithms use special “geometry-improving” iterations, where the geometry (poisedness) of the underlying interpolation set is improved at the cost of one or more function evaluations. We show that such …
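
For reference, the notion of geometry (poisedness) used in model-based DFO is usually stated through Lagrange polynomials; the definition below follows the standard Conn–Scheinberg–Vicente convention and is given only as background, not as the paper's self-correction argument.

```latex
% Lambda-poisedness: let Y = \{y^0,\dots,y^p\} be the interpolation set and
% \ell_i its Lagrange polynomials, i.e. \ell_i(y^j) = \delta_{ij}.
% Y is \Lambda-poised in a region B if
\[
  \max_{0 \le i \le p} \; \max_{x \in B} \; |\ell_i(x)| \;\le\; \Lambda .
\]
% A geometry-improving iteration replaces points of Y until such a bound holds
% for a prescribed \Lambda, at the cost of one or more function evaluations.
```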

Approximating Hessians in multilevel unconstrained optimization

We consider Hessian approximation schemes for large-scale multilevel unconstrained optimization problems, which typically exhibit a sparse and partially separable structure. This structure allows iterative quasi-Newton methods to solve them despite their size. Structured finite-difference methods and updating schemes based on the secant equation are presented and compared numerically inside the multilevel trust-region algorithm proposed by …
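
As an illustration of an updating scheme based on the secant equation B_{k+1} s_k = y_k, a minimal symmetric rank-one update is sketched below; it is only one representative formula, and the sparsity and partial separability structure discussed in the abstract is deliberately not exploited here.

```python
import numpy as np

def sr1_update(B, s, y, skip_tol=1e-8):
    """Symmetric rank-one update: returns B_new satisfying the secant equation
    B_new @ s = y, skipping the update when the usual denominator safeguard
    fails.  Problem structure is ignored in this dense sketch."""
    r = y - B @ s
    denom = r.dot(s)
    if abs(denom) < skip_tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B                      # skip the update to avoid numerical breakdown
    return B + np.outer(r, r) / denom
```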

Nonlinear Stepsize Control, Trust Regions and Regularizations for Unconstrained Optimization

A general class of algorithms for unconstrained optimization is introduced, which subsumes the classical trust-region algorithm and two of its newer variants, as well as the cubic and quadratic regularization methods. A unified theory of global convergence to first-order critical points is then described for this class. An extension to projection-based trust-region algorithms for nonlinear …
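
The two families being unified can be contrasted through their subproblems; the standard trust-region and cubic-regularization models are shown below for orientation (the paper's general nonlinear stepsize control rule itself is not reproduced).

```latex
% Trust-region step: minimize a quadratic model inside a ball of radius \Delta_k,
\[
  \min_{s}\; g_k^{\top} s + \tfrac12\, s^{\top} B_k s
  \quad\text{s.t.}\quad \|s\| \le \Delta_k .
\]
% Cubic-regularization step: penalize the step length instead of constraining it,
\[
  \min_{s}\; g_k^{\top} s + \tfrac12\, s^{\top} B_k s + \tfrac{\sigma_k}{3}\,\|s\|^{3} .
\]
% In both cases the parameter (\Delta_k or \sigma_k) is adapted from the ratio of
% achieved to predicted reduction; a nonlinear stepsize control rule generalizes
% precisely this adaptation.
```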

Incorporating Minimum Frobenius Norm Models in Direct Search

The goal of this paper is to show that the use of minimum Frobenius norm quadratic models can improve the performance of direct-search methods. The approach taken here is to maintain the structure of directional direct-search methods, organized around a search and a poll step, and to use the set of previously evaluated points generated …
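
A minimum Frobenius norm quadratic model is obtained by minimizing the Frobenius norm of the model Hessian subject to the interpolation conditions, which reduces to one saddle-point linear system; the sketch below assumes the interpolation points are well poised (the KKT matrix is nonsingular) and makes no claim about the paper's particular search-step implementation.

```python
import numpy as np

def mfn_model(Y, f):
    """Build the minimum-Frobenius-norm quadratic model m(x) = c + g.x + 0.5 x.H.x
    interpolating the values f on the points Y (one point per row), by solving the
    standard KKT system with kernel A_ij = 0.5*(y_i . y_j)**2; returns (c, g, H)."""
    p, n = Y.shape
    A = 0.5 * (Y @ Y.T) ** 2                  # kernel of the Frobenius-norm problem
    P = np.hstack([np.ones((p, 1)), Y])       # constant and linear parts of the model
    K = np.block([[A, P], [P.T, np.zeros((n + 1, n + 1))]])
    rhs = np.concatenate([f, np.zeros(n + 1)])
    sol = np.linalg.solve(K, rhs)             # assumes a well-poised (nonsingular) system
    lam, c, g = sol[:p], sol[p], sol[p + 1:]
    H = (Y.T * lam) @ Y                       # H = sum_i lam_i * y_i y_i^T
    return c, g, H
```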

Necessary conditions for local optimality in d.c. programming

Using $\varepsilon$-subdifferential calculus for difference-of-convex (d.c.) programming, Dür proposed a condition sufficient for local optimality, and showed that this condition is not necessary in general. Here it is proved that whenever the convex part is strongly convex, this condition is also necessary. Strong convexity can always be ensured by changing the given d.c. decomposition slightly. …
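
For context, a d.c. program minimizes f = g − h with g and h convex; the classical ε-subdifferential characterization of global optimality due to Hiriart-Urruty is recalled below for orientation only (Dür's local-optimality condition is a related inclusion and is not restated here).

```latex
% d.c. program: minimize f(x) = g(x) - h(x) over R^n, with g and h convex.
% Classical characterization of global optimality (Hiriart-Urruty):
% x^* is a global minimizer of f if and only if
\[
  \partial_{\varepsilon} h(x^*) \;\subseteq\; \partial_{\varepsilon} g(x^*)
  \qquad \text{for all } \varepsilon \ge 0 .
\]
% The abstract concerns the analogous condition for local optimality, which
% becomes necessary as well once the convex part is strongly convex.
```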

SESOP-TN: Combining Sequential Subspace Optimization with Truncated Newton method

SESOP-TN is a method for very large-scale unconstrained optimization of smooth functions. It combines ideas of Sequential Subspace Optimization (SESOP) [Narkiss-Zibulevsky-2005] with those of the Truncated Newton (TN) method. Replacing the TN line search with subspace optimization allows Conjugate Gradient (CG) iterations to stay matched through consecutive TN steps. This resolves the problem …
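
The TN component typically computes an approximate Newton step by conjugate gradients truncated on a residual tolerance and a negative-curvature test; below is a minimal matrix-free sketch of that inner loop (the SESOP subspace step that replaces the line search is not shown, and the names are illustrative).

```python
import numpy as np

def truncated_newton_step(hess_vec, g, tol_factor=0.5, max_iter=100):
    """Approximately solve H d = -g by conjugate gradients, truncating on a
    relative residual tolerance or when negative curvature is detected.
    hess_vec(v) must return H @ v (matrix-free Hessian-vector product)."""
    d = np.zeros_like(g)
    r = -g.copy()                   # residual of H d = -g at d = 0
    p = r.copy()
    tol = tol_factor * np.linalg.norm(g)
    for _ in range(max_iter):
        Hp = hess_vec(p)
        curv = p.dot(Hp)
        if curv <= 0:               # negative curvature: stop with current iterate
            return d if d.any() else -g
        alpha = r.dot(r) / curv
        d += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) <= tol:
            break
        beta = r_new.dot(r_new) / r.dot(r)
        p = r_new + beta * p
        r = r_new
    return d
```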