In numerical optimization, line-search and trust-region methods are two important classes of descent schemes with well-understood global convergence properties. Here we consider "accelerated" versions of these methods, in which the conventional iterate may be replaced by any point that produces at least a fixed fraction of the decrease in the cost function achieved by the conventional iterate. A detailed convergence analysis reveals that the global convergence properties of line-search and trust-region methods still hold when the methods are accelerated in this way. The analysis is carried out in the general setting of optimization on manifolds, of which optimization in R^n is a particular case. This general convergence analysis sheds new light on the behavior of several existing algorithms.
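To illustrate the acceptance rule in the familiar R^n setting, the following is a minimal sketch, not the paper's algorithm or notation: the functions f, grad_f, the backtracking parameters, and the fraction c are assumptions chosen for the example.

```python
import numpy as np

def backtracking_step(f, grad_f, x, alpha0=1.0, beta=0.5, sigma=1e-4):
    """Conventional iterate: Armijo backtracking along the steepest-descent direction."""
    g = grad_f(x)
    alpha = alpha0
    while f(x - alpha * g) > f(x) - sigma * alpha * np.dot(g, g):
        alpha *= beta
    return x - alpha * g

def accelerated_step(f, grad_f, x, candidate, c=0.5):
    """Accept any candidate that achieves at least a fraction c of the
    decrease produced by the conventional line-search iterate."""
    x_ls = backtracking_step(f, grad_f, x)
    conventional_decrease = f(x) - f(x_ls)
    if f(x) - f(candidate) >= c * conventional_decrease:
        return candidate   # accelerated iterate accepted
    return x_ls            # fall back to the conventional iterate
```

The candidate point may come from any heuristic (e.g., a subspace acceleration or an extrapolation step); the sufficient-decrease comparison against the conventional iterate is what preserves the global convergence guarantees described above.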
Citation
SIAM Journal on Numerical Analysis, Vol. 47, No. 2, pp. 997-1018, 2009. DOI: http://dx.doi.org/10.1137/08072019X