We analyze the regularization properties of two recently proposed gradient methods applied to discrete linear inverse problems. By studying their filter factors, we show that the tendency of these methods to eliminate first the eigencomponents of the gradient corresponding to the large singular values allows them to reconstruct the most significant part of the solution, thus yielding a useful filtering effect. This behavior is confirmed by numerical experiments on image restoration problems. Furthermore, the experiments show that, for severely ill-conditioned problems and high noise levels, the two methods can be competitive with the Conjugate Gradient (CG) method: although slightly slower than CG, they exhibit better semiconvergence behavior.
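Since the two gradient methods are not specified in this abstract, the sketch below is only an illustrative stand-in, not the authors' methods: plain steepest descent on the normal equations for a synthetic severely ill-conditioned problem. It tracks the iteration-dependent filter factors phi_i(k) = 1 - prod_{j<k} (1 - alpha_j * sigma_i^2) (the standard expression for gradient iterations started from zero), which show the components associated with the largest singular values being reconstructed first, and the relative error, whose minimum-then-growth pattern is the semiconvergence referred to above. Problem size, singular-value decay, and noise level are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 64

# Toy severely ill-conditioned problem built from a prescribed SVD
# (size, decay rate, and noise level are arbitrary illustration choices).
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 10.0 ** np.linspace(0, -6, n)           # rapidly decaying singular values
A = U @ np.diag(sigma) @ V.T
x_true = V @ np.sqrt(sigma)                     # exact solution with decaying SVD coefficients
b = A @ x_true + 1e-3 * rng.standard_normal(n)  # noisy data

# Steepest descent with exact line search on 0.5*||A x - b||^2, started from zero,
# used here only as a generic regularizing gradient iteration (not the paper's methods).
x = np.zeros(n)
prod = np.ones(n)                               # running product of (1 - alpha_j * sigma_i^2)
rel_err = []
for k in range(200):
    g = A.T @ (A @ x - b)                       # gradient of the least-squares functional
    Ag = A @ g
    alpha = (g @ g) / (Ag @ Ag)                 # exact steepest-descent step length
    x -= alpha * g
    prod *= 1.0 - alpha * sigma ** 2
    rel_err.append(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

phi = 1.0 - prod                                # filter factors after the last iteration
print("filter factors, 5 largest sigma: ", np.round(phi[:5], 3))
print("filter factors, 5 smallest sigma:", np.round(phi[-5:], 3))
# If semiconvergence occurs, the relative error reaches a minimum before the
# last iteration and then grows as noise-dominated components are fitted.
print("minimum relative error %.3e at iteration %d (final: %.3e)"
      % (min(rel_err), int(np.argmin(rel_err)), rel_err[-1]))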