Properties of the delayed weighted gradient method

The delayed weighted gradient method, recently introduced in [13], is a low-cost gradient-type method that exhibits a surprisingly and perhaps unexpectedly fast convergence behavior, competing favorably with the well-known conjugate gradient method for the minimization of convex quadratic functions. In this work, we establish several orthogonality properties that add understanding to the practical behavior of the method, including its finite termination. We show that if the n × n real Hessian matrix of the quadratic function has only p < n distinct eigenvalues, then the method terminates in p iterations. We also establish an optimality condition, concerning the gradient norm, that motivates the use of this novel scheme when low precision is required for the minimization of non-quadratic functions.
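
To make the setting concrete, the sketch below implements one plausible reading of a DWGM-style iteration for f(x) = (1/2) xᵀAx − bᵀx with A symmetric positive definite, together with a small experiment whose Hessian has p = 3 distinct eigenvalues. The update rules (a step size that minimizes the gradient norm along the negative gradient, followed by a delayed weight combining the previous iterate with the smoothed point) reflect our reading of [13] and are assumptions, not formulas stated in this abstract; the function name dwgm and all parameter values are purely illustrative.

```python
import numpy as np

def dwgm(A, b, x0, tol=1e-10, max_iter=1000):
    """Illustrative DWGM-style iteration for minimizing
    f(x) = 0.5 x^T A x - b^T x, A symmetric positive definite.
    The update rules follow our reading of [13] (an assumption)."""
    x_prev, x = x0.copy(), x0.copy()
    g_prev = g = A @ x - b                 # gradient of the quadratic at x0
    for k in range(max_iter):
        if np.linalg.norm(g) <= tol:
            return x, k
        w = A @ g
        alpha = (g @ w) / (w @ w)          # step minimizing ||g(x - alpha*g)||
        y = x - alpha * g                  # intermediate (smoothed) point
        r = g - alpha * w                  # gradient at y
        d = g_prev - r
        if d @ d == 0.0:                   # guard: delayed direction vanished
            x_prev, g_prev, x, g = x, g, y, r
            continue
        beta = (g_prev @ d) / (d @ d)      # weight minimizing the new gradient norm
        x_next = x_prev - beta * (x_prev - y)
        g_next = g_prev - beta * d
        x_prev, g_prev = x, g
        x, g = x_next, g_next
    return x, max_iter

# Small experiment: n = 50 with p = 3 distinct eigenvalues {1, 4, 9}.
# The abstract's result predicts termination after about p iterations
# in exact arithmetic (up to rounding in floating point).
rng = np.random.default_rng(0)
n = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.repeat([1.0, 4.0, 9.0], [20, 20, 10])
A = Q @ np.diag(eigs) @ Q.T
b = rng.standard_normal(n)
x_star, iters = dwgm(A, b, np.zeros(n))
print(iters, np.linalg.norm(A @ x_star - b))
```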
