A Retrospective Filter Trust Region Algorithm For Unconstrained Optimization

In this paper, we propose a retrospective filter trust-region algorithm for unconstrained optimization, which is based on the framework of the retrospective trust-region method combined with the multidimensional filter technique. The new algorithm gives a good estimate of the trust-region radius and relaxes the condition for accepting a trial step …
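The retrospective idea is to re-measure the agreement between the objective and the model using the updated model m_{k+1} rather than the old one, while the multidimensional filter accepts a trial point whenever it improves at least one gradient component against every stored entry. A minimal sketch of both ingredients, with thresholds and margins that are illustrative assumptions rather than the paper's choices:

```python
import numpy as np

def retrospective_radius(radius, f_old, f_new, m_new_at_old, m_new_at_new,
                         eta1=0.05, eta2=0.9, shrink=0.5, grow=2.0):
    """Radius update driven by the retrospective ratio: achieved reduction
    versus the reduction predicted by the NEW model m_{k+1} (thresholds
    eta1, eta2 and the scaling factors are illustrative)."""
    pred = m_new_at_old - m_new_at_new
    rho = (f_old - f_new) / pred if pred > 0 else -1.0
    if rho >= eta2:
        return grow * radius      # new model explains the step well: enlarge
    if rho >= eta1:
        return radius             # acceptable agreement: keep the radius
    return shrink * radius        # poor agreement: shrink

def filter_acceptable(g_trial, filter_entries, gamma=1e-5):
    """Multidimensional filter test for unconstrained problems: the trial
    gradient is acceptable if, against every stored gradient, at least one
    component is smaller by a margin (gamma is an illustrative parameter)."""
    g_trial = np.abs(g_trial)
    for g_f in filter_entries:
        g_f = np.abs(g_f)
        if not np.any(g_trial <= g_f - gamma * np.linalg.norm(g_f)):
            return False          # dominated by this filter entry: reject
    return True
```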

A three-term conjugate gradient method with sufficient descent property for unconstrained optimization

Conjugate gradient methods are widely used for solving large-scale unconstrained optimization problems because they do not require the storage of matrices. In this paper, we propose a general form of three-term conjugate gradient methods that always generate a sufficient descent direction. We give a sufficient condition for the global convergence of the proposed general method. …
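As one concrete member of such a family, the Zhang-Zhou-Li three-term PRP direction satisfies g_k^T d_k = -||g_k||^2 by construction, so sufficient descent holds independently of the line search; a minimal sketch of that known instance (not necessarily the paper's general form):

```python
import numpy as np

def three_term_direction(g, g_prev, d_prev):
    """Zhang-Zhou-Li three-term PRP direction (with d_0 = -g_0):
        d_k = -g_k + beta_k * d_{k-1} - theta_k * y_{k-1},
    which gives g_k . d_k = -||g_k||^2 exactly, i.e. sufficient descent."""
    y = g - g_prev
    denom = np.dot(g_prev, g_prev)
    beta = np.dot(g, y) / denom
    theta = np.dot(g, d_prev) / denom
    return -g + beta * d_prev - theta * y
```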

A globally convergent modified conjugate-gradient line-search algorithm with inertia controlling

In this paper we address the problem of unboundedness of the search direction when the Hessian is indefinite or nearly singular. We propose a new algorithm that naturally handles singular Hessian matrices and is theoretically equivalent to the trust-region approach. This is accomplished by adaptively performing explicit matrix modifications that mimic the implicit …
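A classical example of such an explicit modification, in whose spirit this approach works, is Cholesky factorization with an adaptively added multiple of the identity; a sketch with illustrative constants (this is the textbook device, not the paper's algorithm):

```python
import numpy as np

def modified_newton_direction(H, g, beta=1e-3):
    """Add tau * I to a possibly indefinite Hessian, increasing tau until a
    Cholesky factorization succeeds, then solve for the Newton-like step."""
    diag_min = np.min(np.diag(H))
    tau = 0.0 if diag_min > 0 else beta - diag_min
    n = H.shape[0]
    while True:
        try:
            L = np.linalg.cholesky(H + tau * np.eye(n))
            break
        except np.linalg.LinAlgError:
            tau = max(2.0 * tau, beta)   # grow the shift and retry
    # (H + tau I) d = -g, solved with the triangular factors
    return np.linalg.solve(L.T, np.linalg.solve(L, -g))
```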

A Combined Class of Self-Scaling and Modified Quasi-Newton Methods

Techniques for obtaining safely positive definite Hessian approximations with self-scaling and modified quasi-Newton updates are combined to obtain 'better' curvature approximations in line search methods for unconstrained optimization. It is shown that this class of methods, like the BFGS method, has global and superlinear convergence for convex functions. Numerical experiments with this class, using the …
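A representative member of this class is the self-scaling BFGS update, in which the inverse Hessian approximation is rescaled by the Oren-Luenberger factor before the standard update; a sketch (the modified updates of the abstract additionally alter the gradient difference, which is omitted here):

```python
import numpy as np

def self_scaling_bfgs(H, s, y, eps=1e-12):
    """Self-scaling BFGS update of an inverse Hessian approximation H from
    the step s and gradient difference y: rescale H by tau = y's / y'Hy,
    then apply the usual BFGS formula; skip numerically unsafe updates."""
    ys = np.dot(y, s)
    if ys <= eps * np.linalg.norm(y) * np.linalg.norm(s):
        return H                          # curvature condition fails: skip
    tau = ys / np.dot(y, H @ y)           # Oren-Luenberger scaling factor
    H = tau * H
    rho = 1.0 / ys
    V = np.eye(len(s)) - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)
```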

Transformations enabling to construct limited-memory Broyden class methods

The Broyden class of quasi-Newton updates for the inverse Hessian approximation is transformed into the formal BFGS update, which makes it possible to generalize the well-known Nocedal method based on the Strang recurrences to the scaled limited-memory Broyden family, using the same number of stored vectors as the limited-memory BFGS method. Two variants are given, the …
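The Strang recurrences referred to here are the two-loop scheme that applies the limited-memory BFGS inverse Hessian to a vector using only the stored pairs (s_i, y_i); a standard sketch for orientation (the paper's generalization to the scaled Broyden family reuses this same storage pattern):

```python
import numpy as np

def lbfgs_two_loop(g, s_list, y_list, gamma=1.0):
    """Compute H_k @ g by Nocedal's two-loop (Strang) recurrence, where H_k
    is the L-BFGS inverse Hessian approximation started from gamma * I and
    s_list, y_list hold the stored pairs from oldest to newest."""
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest to oldest
        rho = 1.0 / np.dot(y, s)
        a = rho * np.dot(s, q)
        q -= a * y
        alphas.append((a, rho))
    r = gamma * q                                          # apply H_0 = gamma I
    for (a, rho), (s, y) in zip(reversed(alphas),
                                zip(s_list, y_list)):      # oldest to newest
        b = rho * np.dot(y, r)
        r += (a - b) * s
    return r                       # search direction is then -r
```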

Limited-memory projective variable metric methods for unconstrained minimization

A new family of limited-memory variable metric or quasi-Newton methods for unconstrained minimization is given. The methods are based on a positive definite inverse Hessian approximation in the form of the sum of the identity matrix and two low-rank matrices, obtained by the standard scaled Broyden class update. To reduce the rank of the matrices, various …
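The storage pattern named here can be made concrete by how such an approximation acts on a gradient: with H = gamma * I + U U^T - V V^T held as a scalar and two thin matrices, a matrix-vector product never forms H explicitly. A sketch (the factor matrices are placeholders; the paper's updates and rank-reduction schemes determine them):

```python
import numpy as np

def apply_structured_inverse(g, gamma, U, V):
    """Product H @ g for H = gamma * I + U @ U.T - V @ V.T, with U, V of
    shape (n, k), k << n, so the cost is O(n k) instead of O(n^2)."""
    return gamma * g + U @ (U.T @ g) - V @ (V.T @ g)
```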

Self-correcting geometry in model-based algorithms for derivative-free unconstrained optimization

Several efficient methods for derivative-free optimization (DFO) are based on the construction and maintenance of an interpolation model for the objective function. Most of these algorithms use special "geometry-improving" iterations, in which the geometry (poisedness) of the underlying interpolation set is improved at the cost of one or more function evaluations. We show that such …
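For orientation, the simplest such ingredient is a linear interpolation model through n + 1 sample points; in the sketch below, the conditioning of the interpolation system serves as a crude stand-in for the poisedness notion of the abstract (the DFO literature uses Lagrange-polynomial-based measures instead):

```python
import numpy as np

def linear_interp_model(points, fvals):
    """Fit m(x) = f0 + g.(x - x0) through n + 1 points in R^n and return
    the model gradient plus the condition number of the displacement
    matrix as a rough geometry (poisedness) indicator."""
    x0, f0 = points[0], fvals[0]
    A = np.array([p - x0 for p in points[1:]])    # n x n displacements
    g = np.linalg.solve(A, np.array(fvals[1:]) - f0)
    return f0, g, np.linalg.cond(A)
```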

Approximating Hessians in multilevel unconstrained optimization

We consider Hessian approximation schemes for large-scale multilevel unconstrained optimization problems, which typically exhibit sparsity and partial separability. This structure allows iterative quasi-Newton methods to solve them despite their size. Structured finite-difference methods and updating schemes based on the secant equation are presented and compared numerically inside the multilevel trust-region algorithm proposed by …
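Two of the basic building blocks mentioned here can be sketched briefly: a forward-difference Hessian-vector product, and a symmetric rank-one update satisfying the secant equation B s = y (the step size and safeguard tolerance are illustrative assumptions):

```python
import numpy as np

def hessvec_fd(grad, x, v, eps=1e-6):
    """Finite-difference Hessian-vector product:
    H(x) @ v ~ (grad(x + eps v) - grad(x)) / eps."""
    return (grad(x + eps * v) - grad(x)) / eps

def sr1_update(B, s, y, tol=1e-8):
    """Symmetric rank-one secant update: the cheapest symmetric update with
    B_new @ s = y; skipped when the denominator is numerically unsafe."""
    r = y - B @ s
    denom = np.dot(r, s)
    if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B
    return B + np.outer(r, r) / denom
```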

Descent heuristics for unconstrained minimization

Semidefinite relaxations often provide excellent starting points for nonconvex problems with multiple local minimizers. This work aims to find a local minimizer within a certain neighborhood of the starting point and with a small objective value. Several approaches are motivated and compared with each other.

Citation: Report, Mathematisches Institut, Universitaet Duesseldorf, August 2008.

A stochastic algorithm for function minimization

Starting from the conditions that an optimization problem should satisfy, so-called convergence conditions are proposed, and a stochastic optimization algorithm, named the DSZ algorithm, is then presented to handle both unconstrained and constrained optimization. Its principle is discussed within the theoretical model of the DSZ algorithm, from which we present a practical …
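Since the truncated abstract does not specify the DSZ construction, only a generic stochastic minimizer can be sketched to illustrate the problem class; the following adaptive random search is an assumption-laden placeholder, not the DSZ algorithm:

```python
import numpy as np

def random_search(f, x0, radius=1.0, iters=1000, shrink=0.999, seed=0):
    """Minimal stochastic minimization: sample Gaussian perturbations,
    keep improving points, and slowly contract the sampling radius."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        trial = x + radius * rng.standard_normal(x.shape)
        f_trial = f(trial)
        if f_trial < fx:             # accept only improving moves
            x, fx = trial, f_trial
        radius *= shrink
    return x, fx
```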