Conjugate gradient methods based on secant conditions that generate descent search directions for unconstrained optimization

Conjugate gradient methods have attracted attention because they can be applied directly to large-scale unconstrained optimization problems. In order to incorporate second-order information about the objective function into conjugate gradient methods, Dai and Liao (2001) proposed a conjugate gradient method based on the secant condition. However, their method does not necessarily generate … Read more
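
For reference, a minimal sketch of the Dai-Liao construction in standard notation ($g_k = \nabla f(x_k)$, $s_{k-1} = x_k - x_{k-1}$, $y_{k-1} = g_k - g_{k-1}$): the search direction $d_k = -g_k + \beta_k d_{k-1}$ uses the parameter

\[
\beta_k^{DL} = \frac{g_k^{T}\,(y_{k-1} - t\, s_{k-1})}{d_{k-1}^{T} y_{k-1}}, \qquad t > 0,
\]

which is derived from the secant condition $B_k s_{k-1} = y_{k-1}$ of quasi-Newton methods; as the title indicates, the issue is that such directions need not be descent directions for every choice of $t$.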

A conjugate directions approach to improve the limited-memory BFGS method

Simple modifications of the limited-memory BFGS method (L-BFGS) for large-scale unconstrained optimization are considered. They consist of corrections, derived from the idea of conjugate directions, to the difference vectors used in the update, utilizing information from the preceding iteration. In the case of quadratic objective functions, the improvement of convergence is, in a certain sense, the best possible and … Read more
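
As context only, a minimal Python (NumPy) sketch of the standard L-BFGS two-loop recursion into which corrected difference vectors would be substituted; the correction proposed in the paper is not reproduced here.

    import numpy as np

    def lbfgs_direction(g, s_list, y_list):
        """Standard L-BFGS two-loop recursion: returns d = -H*g built from the
        stored difference pairs (s_i, y_i), oldest first. The modifications
        discussed in the paper would replace s_list/y_list by corrected vectors."""
        q = g.copy()
        rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
        alphas = []
        # first loop: newest pair to oldest
        for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
            a = rho * np.dot(s, q)
            alphas.append(a)
            q -= a * y
        # initial scaling H0 = gamma * I (a common choice)
        s, y = s_list[-1], y_list[-1]
        r = (np.dot(s, y) / np.dot(y, y)) * q
        # second loop: oldest pair to newest
        for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
            b = rho * np.dot(y, r)
            r += (a - b) * s
        return -r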

A Nonlinear Conjugate Gradient Algorithm with An Optimal Property and An Improved Wolfe Line Search

In this paper, we seek the conjugate gradient direction closest to the direction of the scaled memoryless BFGS method and propose a family of conjugate gradient methods for unconstrained optimization. An improved Wolfe line search is also proposed, which can avoid a numerical drawback of the Wolfe line search and guarantee the global convergence of … Read more
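
For context, the standard Wolfe conditions that such line searches build on, for a step length $\alpha_k$ along a direction $d_k$ and constants $0 < \delta < \sigma < 1$, read

\[
f(x_k + \alpha_k d_k) \le f(x_k) + \delta\, \alpha_k\, g_k^{T} d_k,
\qquad
g(x_k + \alpha_k d_k)^{T} d_k \ge \sigma\, g_k^{T} d_k .
\]

The improved line search proposed in the paper modifies this framework to avoid a numerical drawback of the standard conditions; the exact modification is not reproduced here.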

Updating the regularization parameter in the adaptive cubic regularization algorithm

The adaptive cubic regularization method [Cartis, Gould, Toint, 2009-2010] has been recently proposed for solving unconstrained minimization problems. At each iteration of this method, the objective function is replaced by a cubic approximation which comprises an adaptive regularization parameter whose role is related to the local Lipschitz constant of the objective’s Hessian. We present new … Read more
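
For reference, the cubic approximation used at each iteration, in standard notation with $g_k = \nabla f(x_k)$, a symmetric Hessian approximation $B_k$, and the adaptive regularization parameter $\sigma_k > 0$, is

\[
m_k(s) = f(x_k) + g_k^{T} s + \tfrac{1}{2}\, s^{T} B_k s + \tfrac{\sigma_k}{3}\, \|s\|^{3},
\]

and the trial step is an (approximate) minimizer of $m_k$; the rules for updating $\sigma_k$ are the subject of the paper.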

On the convergence of trust region algorithms for unconstrained minimization without derivatives

We consider iterative trust region algorithms for the unconstrained minimization of an objective function F(x) of n variables, when F is differentiable but no derivatives are available, and when each model of F is a linear or quadratic polynomial. The models interpolate F at n+1 points, which defines them uniquely when they are linear polynomials. … Read more
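
For reference, with interpolation points $y_1, \dots, y_{n+1}$ the models are defined by the conditions

\[
m_k(y_j) = F(y_j), \qquad j = 1, \dots, n+1,
\]

which determine a linear model $m_k(x) = c_k + g_k^{T} x$ uniquely whenever the points are affinely independent; a quadratic model has more degrees of freedom than these $n+1$ conditions fix, so additional requirements (for example, a minimum-norm choice of the remaining freedom) are needed.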

Generalizations of the limited-memory BFGS method based on quasi-product form of update

Two families of limited-memory variable metric (quasi-Newton) methods for unconstrained minimization, based on the quasi-product form of the update, are derived. For the first family, four variants of how to utilize the Strang recurrences for the Broyden class of variable metric updates are investigated; three of them use the same number of stored vectors as the … Read more
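
For reference, the Broyden class of variable metric updates referred to here can be written, in standard notation with $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$, as

\[
B_{k+1} = B_k - \frac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k} + \frac{y_k y_k^{T}}{y_k^{T} s_k}
+ \phi\, (s_k^{T} B_k s_k)\, v_k v_k^{T},
\qquad
v_k = \frac{y_k}{y_k^{T} s_k} - \frac{B_k s_k}{s_k^{T} B_k s_k},
\]

where $\phi = 0$ gives BFGS and $\phi = 1$ gives DFP; the quasi-product form and the Strang-recurrence variants investigated in the paper are built on this class and are not reproduced here.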

Recursive formulation of limited memory variable metric methods

In this report we propose a new recursive matrix formulation of limited memory variable metric methods. This approach can be used for an arbitrary update from the Broyden class (and some other updates) and also for the approximation of both the Hessian matrix and its inverse. The new recursive formulation requires approximately $4 m n$ … Read more
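
For context, limited memory methods keep only the last $m$ difference pairs $(s_i, y_i)$, i.e. roughly $2mn$ numbers. A well-known non-recursive matrix formulation is the compact representation of the BFGS approximation,

\[
B_k = B_0 - \begin{bmatrix} B_0 S_k & Y_k \end{bmatrix}
\begin{bmatrix} S_k^{T} B_0 S_k & L_k \\ L_k^{T} & -D_k \end{bmatrix}^{-1}
\begin{bmatrix} S_k^{T} B_0 \\ Y_k^{T} \end{bmatrix},
\]

where $S_k$ and $Y_k$ collect the stored $s_i$ and $y_i$ as columns, $D_k$ is the diagonal of $S_k^{T} Y_k$ and $L_k$ its strictly lower triangular part; the recursive formulation proposed in the report is a different construction and is not reproduced here.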

Band preconditioners for the matrix-free truncated Newton method

This report is devoted to preconditioning techniques for the matrix-free truncated Newton method. After a review of basic known approaches, we propose new results concerning tridiagonal and pentadiagonal preconditioners based on the standard BFGS updates and on numerical differentiation. Furthermore, we present results of extensive numerical experiments that serve for a careful comparison of suitable preconditioning … Read more
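
As a rough illustration only (not the preconditioners studied in the report), the sketch below combines a matrix-free Hessian-vector product, obtained by differencing the gradient, with a preconditioned CG inner loop; the caller supplies the inverse of a simple diagonal preconditioner M_inv, which the report's tridiagonal and pentadiagonal preconditioners would replace.

    import numpy as np

    def hessvec(grad, x, v, eps=1e-6):
        """Matrix-free Hessian-vector product H(x) v by forward-differencing
        the gradient, as used in truncated Newton methods."""
        return (grad(x + eps * v) - grad(x)) / eps

    def preconditioned_cg(grad, x, g, M_inv, max_iter=50, tol=1e-8):
        """Inner CG loop of a truncated Newton step, approximately solving
        H d = -g with a diagonal preconditioner given by its inverse M_inv.
        (A full truncated Newton code would also test for negative curvature.)"""
        d = np.zeros_like(g)
        r = -g.copy()              # residual of H d = -g at d = 0
        z = M_inv * r              # apply the preconditioner
        p = z.copy()
        rz = np.dot(r, z)
        for _ in range(max_iter):
            Hp = hessvec(grad, x, p)
            alpha = rz / np.dot(p, Hp)
            d += alpha * p
            r -= alpha * Hp
            if np.linalg.norm(r) < tol:
                break
            z = M_inv * r
            rz_new = np.dot(r, z)
            p = z + (rz_new / rz) * p
            rz = rz_new
        return d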

On the Use of Stochastic Hessian Information in Unconstrained Optimization

This paper describes how to incorporate stochastic curvature information in a Newton-CG method and in a limited memory quasi-Newton method for large-scale optimization. The motivation for this work stems from statistical learning and stochastic optimization applications in which the objective function is the sum of a very large number of loss terms, and … Read more
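
As a minimal illustration of the idea, not the paper's algorithm: assume the objective is the least-squares loss $f(x) = \frac{1}{2N}\|Ax - b\|^{2}$, a sum of $N$ loss terms with Hessian $\frac{1}{N}A^{T}A$. A stochastic Hessian-vector product evaluates the curvature on a random subsample of the terms, and such a product is exactly what the CG solver inside a Newton-CG iteration needs.

    import numpy as np

    def subsampled_hessvec(A, v, sample_size, rng=None):
        """Stochastic Hessian-vector product for f(x) = 1/(2N) ||A x - b||^2:
        only a random row subsample S is used, Hv ~ (1/|S|) A_S^T (A_S v)."""
        rng = np.random.default_rng() if rng is None else rng
        idx = rng.choice(A.shape[0], size=sample_size, replace=False)
        A_s = A[idx]
        return A_s.T @ (A_s @ v) / sample_size

    # example: the subsampled product approximates the full Hessian-vector product
    A = np.random.default_rng(1).standard_normal((10000, 50))
    v = np.ones(50)
    full = A.T @ (A @ v) / A.shape[0]
    approx = subsampled_hessvec(A, v, sample_size=500)
    print(np.linalg.norm(full - approx) / np.linalg.norm(full))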

A Retrospective Filter Trust Region Algorithm For Unconstrained Optimization

In this paper, we propose a retrospective filter trust region algorithm for unconstrained optimization, which is based on the framework of the retrospective trust region method combined with the technique of the multidimensional filter. The new algorithm gives a good estimate of the trust region radius and relaxes the condition for accepting a trial step … Read more
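
For context, trust region methods accept or reject a trial step $s_k$ and resize the radius through the ratio of actual to predicted reduction; in the retrospective framework, as it is usually described, the radius is instead governed by re-measuring the accepted step with the newly built model $m_{k+1}$:

\[
\rho_k = \frac{f(x_k) - f(x_k + s_k)}{m_k(x_k) - m_k(x_k + s_k)},
\qquad
\tilde{\rho}_{k+1} = \frac{f(x_k) - f(x_{k+1})}{m_{k+1}(x_k) - m_{k+1}(x_{k+1})}.
\]

How the multidimensional filter relaxes the acceptance test is the subject of the paper and is not reproduced here.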