An extended delayed weighted gradient algorithm for solving strongly convex optimization problems

The recently developed delayed weighted gradient method (DWGM) is competitive with the well-known conjugate gradient (CG) method for the minimization of strictly convex quadratic functions. Like the CG method, DWGM has key optimality and orthogonality properties that justify its practical performance. The main difference from the CG method is that, instead of …
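For context, here is a minimal sketch of the classical linear CG iteration that DWGM is benchmarked against. This is the textbook baseline, not the paper's algorithm; the function name and tolerance are illustrative.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """Classical linear CG for minimizing f(x) = 0.5*x^T A x - b^T x
    with A symmetric positive definite (equivalently, solving A x = b)."""
    x = x0.copy()
    r = b - A @ x          # residual, i.e. the negative gradient at x
    d = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(len(b)):
        Ad = A @ d
        alpha = rs / (d @ Ad)          # exact minimizer along d
        x += alpha * d
        r -= alpha * Ad
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs) * d      # conjugate direction update
        rs = rs_new
    return x
```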

A study of Liu-Storey conjugate gradient methods for vector optimization

This work presents a study of Liu-Storey (LS) nonlinear conjugate gradient (CG) methods for solving vector optimization problems. Three variants of the LS-CG method, originally designed to solve single-objective problems, are extended to the vector setting. The first algorithm restricts the LS conjugate parameter to be nonnegative and uses a sufficiently accurate line search satisfying …
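For reference, in the single-objective case the Liu-Storey parameter and the nonnegative restriction mentioned above read (with \(g_k = \nabla f(x_k)\) and search direction \(d_k\)):

\[
\beta_k^{LS} = \frac{g_k^\top (g_k - g_{k-1})}{-d_{k-1}^\top g_{k-1}}, \qquad
d_k = -g_k + \beta_k d_{k-1}, \qquad
\beta_k^{LS+} = \max\{\beta_k^{LS},\, 0\}.
\]

How these scalar quantities are generalized to the vector-valued setting is specified in the paper itself.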

An Accelerated Minimal Gradient Method with Momentum for Convex Quadratic Optimization

In this article we address the problem of minimizing a strictly convex quadratic function using a novel iterative method. The new algorithm is based on the well-known Nesterov's accelerated gradient method. At each iteration of our scheme, the new point is computed by performing a line search along a search direction given by a linear …
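For context, a minimal sketch of the classical constant-momentum variant of Nesterov's accelerated gradient method on a strictly convex quadratic, the baseline the abstract builds on; the paper's line-search modification is not reproduced here, and the defaults are illustrative.

```python
import numpy as np

def nesterov_agd(A, b, x0, L=None, mu=None, max_iter=500, tol=1e-10):
    """Nesterov's accelerated gradient on f(x) = 0.5*x^T A x - b^T x,
    using the constant momentum coefficient for strongly convex problems."""
    if L is None or mu is None:
        eigs = np.linalg.eigvalsh(A)
        L = L if L is not None else eigs[-1]    # gradient Lipschitz constant
        mu = mu if mu is not None else eigs[0]  # strong convexity parameter
    q = np.sqrt(mu / L)
    beta = (1 - q) / (1 + q)        # classical momentum coefficient
    x = x0.copy()
    y = x0.copy()
    for _ in range(max_iter):
        g = A @ y - b               # gradient at the extrapolated point
        x_new = y - g / L           # gradient step
        y = x_new + beta * (x_new - x)  # momentum/extrapolation step
        x = x_new
        if np.linalg.norm(g) < tol:
            break
    return x
```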

A family of optimal weighted conjugate-gradient-type methods for strictly convex quadratic minimization

We introduce a family of weighted conjugate-gradient-type methods for strictly convex quadratic functions, whose parameters are determined by a minimization model based on a convex combination of the objective function and its gradient norm. This family includes the classical linear conjugate gradient method and the recently published delayed weighted gradient method as the extreme cases …
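One way to read the minimization model described above (the exact formulation is in the paper; the squared norm and the parameterization below are assumptions for illustration) is as a weighted merit function

\[
\psi_\theta(x) \;=\; \theta\, f(x) \;+\; (1-\theta)\, \|\nabla f(x)\|^2, \qquad \theta \in [0,1],
\]

with the endpoint values of \(\theta\) plausibly corresponding to the two extreme cases, CG and DWGM, mentioned in the abstract.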

Properties of the delayed weighted gradient method

The delayed weighted gradient method, recently introduced in [13], is a low-cost gradient-type method that exhibits surprisingly, and perhaps unexpectedly, fast convergence behavior that competes favorably with the well-known conjugate gradient method for the minimization of convex quadratic functions. In this work, we establish several orthogonality properties that add understanding to the practical behavior …
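For comparison, the classical orthogonality properties enjoyed by linear CG on \(f(x) = \tfrac12 x^\top A x - b^\top x\) are

\[
r_i^\top r_j = 0 \quad\text{and}\quad d_i^\top A\, d_j = 0 \qquad (i \neq j),
\]

where \(r_k\) are the residuals and \(d_k\) the search directions. The DWGM properties established in the paper are of this flavor, though their precise statements differ and are given in the paper itself.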

Preconditioning and Globalizing Conjugate Gradients in Dual Space for Quadratically Penalized Nonlinear-Least Squares Problems

When solving nonlinear least-squares problems, it is often useful to regularize the problem with a quadratic term, a practice that is especially common in applications arising from inverse calculations. A solution method derived from a trust-region Gauss-Newton algorithm is analyzed for such applications, where, contrary to the standard algorithm, the least-squares subproblem solved at each …
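A generic form of the quadratically penalized problem (a standard Tikhonov-type regularization; the paper's exact formulation and notation may differ) is

\[
\min_x \; \tfrac12 \|F(x)\|^2 + \tfrac{\sigma}{2}\, \|x - \bar{x}\|^2,
\]

whose Gauss-Newton model at \(x_k\), with Jacobian \(J_k\), is the linear least-squares subproblem \(\min_s \tfrac12 \|F(x_k) + J_k s\|^2 + \tfrac{\sigma}{2} \|x_k + s - \bar{x}\|^2\).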

Computational experience with modified conjugate gradient methods for unconstrained optimization

In this report, several modifications of the nonlinear conjugate gradient method are described and investigated. Theoretical properties of these modifications are proved, and their practical performance is demonstrated through extensive numerical experiments.

Citation: Technical Report No. 1038, Institute of Computer Science, Pod Vodarenskou Vezi 2, 18207 Praha 8, December 2008.
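For context, a generic nonlinear CG skeleton of the kind such modifications start from (illustrative code; the report's specific modifications are not reproduced):

```python
import numpy as np

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Generic nonlinear CG with a Polak-Ribiere+ update and a backtracking
    Armijo line search; modified CG methods typically alter the beta formula
    and the restart conditions."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                 # safeguard: restart on non-descent
            d = -g
        alpha, c, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
            alpha *= rho               # backtracking Armijo line search
        x = x + alpha * d
        g_new = grad(x)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PR+ parameter
        d = -g_new + beta * d
        g = g_new
    return x
```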

A subspace minimization method for the trust-region step

We consider methods for large-scale unconstrained minimization based on finding an approximate minimizer of a quadratic function subject to a two-norm trust-region constraint. The Steihaug-Toint method uses the conjugate-gradient (CG) algorithm to minimize the quadratic over a sequence of expanding subspaces until the iterates either converge to an interior point or cross the constraint boundary. …
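A minimal sketch of the Steihaug-Toint truncated CG iteration described above (illustrative code; function names, defaults, and the matrix-free Hessian interface are assumptions):

```python
import numpy as np

def steihaug_toint(g, hessvec, delta, tol=1e-8):
    """Steihaug-Toint truncated CG for the trust-region subproblem
        min_s  g^T s + 0.5 * s^T H s   subject to  ||s|| <= delta,
    where hessvec(v) returns H @ v. Returns an approximate minimizer."""
    s = np.zeros(len(g))
    r = g.copy()              # gradient of the quadratic model at s
    d = -r
    for _ in range(len(g)):
        Hd = hessvec(d)
        dHd = d @ Hd
        if dHd <= 0:
            # negative curvature: follow d to the trust-region boundary
            return s + _boundary_step(s, d, delta) * d
        alpha = (r @ r) / dHd
        s_next = s + alpha * d
        if np.linalg.norm(s_next) >= delta:
            # step would leave the region: stop on the boundary instead
            return s + _boundary_step(s, d, delta) * d
        r_next = r + alpha * Hd
        if np.linalg.norm(r_next) < tol:
            return s_next
        beta = (r_next @ r_next) / (r @ r)
        d = -r_next + beta * d
        s, r = s_next, r_next
    return s

def _boundary_step(s, d, delta):
    """Positive tau with ||s + tau*d|| = delta (root of a quadratic in tau)."""
    a, b, c = d @ d, 2 * (s @ d), (s @ s) - delta**2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
```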