This paper deals with gradient methods for minimizing n-dimensional strictly convex quadratic functions. Two new adaptive stepsize selection rules are presented and some of their key properties are proved. Practical insight into the effectiveness of the proposed techniques is given by a numerical comparison with the Barzilai-Borwein (BB) method, the cyclic/adaptive BB methods, and two recent monotone gradient methods.
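For context, the baseline BB method mentioned above can be sketched as follows for a strictly convex quadratic f(x) = (1/2) x^T A x - b^T x. This is a minimal illustration of the classical BB1 stepsize (alpha_k = s^T s / s^T y, with s = x_k - x_{k-1} and y = g_k - g_{k-1}), not the new adaptive rules proposed in the paper; the first-iteration Cauchy stepsize and the test problem are illustrative choices.

```python
import numpy as np

def bb_gradient_quadratic(A, b, x0, max_iter=500, tol=1e-10):
    """Barzilai-Borwein (BB1) gradient method for f(x) = 0.5 x^T A x - b^T x,
    with A symmetric positive definite.  Illustrative sketch only."""
    x = x0.astype(float)
    g = A @ x - b
    # First iteration uses the exact (Cauchy) stepsize: g^T g / g^T A g.
    alpha = (g @ g) / (g @ (A @ g))
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        # BB1 stepsize for the next iteration: s^T s / s^T y.
        alpha = (s @ s) / (s @ y)
        x, g = x_new, g_new
    return x

# Small SPD test problem (hypothetical data, for illustration only).
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
x_star = bb_gradient_quadratic(A, b, np.zeros(3))
```

For quadratics the BB iteration is nonmonotone in f, which is the backdrop against which the paper compares monotone gradient alternatives.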
Citation
Journal of Industrial and Management Optimization (2007), to appear. (Formerly Technical Report n. 77, Department of Pure and Applied Mathematics, University of Modena and Reggio Emilia, Modena (Italy), January 2007)