New analysis of linear convergence of gradient-type methods via unifying error bound conditions

Linear convergence of gradient-type methods for non-strongly convex optimization has been widely studied by introducing several notions as sufficient conditions. Influential examples include the error bound property, the restricted strong convexity property, the quadratic growth property, and the Kurdyka-{\L}ojasiewicz property. In this paper, we first define a group of error bound conditions in a unified way that covers all of these influential sufficient conditions for linear convergence analysis. We then establish a comprehensive relationship among the unified error bound conditions. To the best of our knowledge, this is the first study to show the very close connections among all of the listed sufficient conditions. Using the unified error bound conditions, which are strictly weaker than strong convexity, the traditional tool for guaranteeing linear convergence, we provide a new linear convergence analysis for a number of fundamental algorithms, including the proximal point algorithm, dual gradient algorithms, and the cyclic block coordinate gradient descent method. Finally, by introducing a composition error bound condition, we obtain Q-linear convergence of gradient descent with Nesterov's acceleration on a new class of functions for which such convergence was not previously known.
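As a minimal illustration of the phenomenon the abstract describes (my own sketch, not code from the paper): rank-deficient least squares is convex but not strongly convex, yet it satisfies an error bound (Polyak-Lojasiewicz) condition, so plain gradient descent still exhibits Q-linear convergence in function value. The matrix sizes, step size, and iteration count below are illustrative assumptions.

```python
import numpy as np

# Rank-deficient least squares: convex, NOT strongly convex,
# but satisfies an error bound / PL condition.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
A[:, -1] = A[:, 0]                 # duplicate column -> singular Hessian A^T A
b = rng.standard_normal(20)

f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
grad = lambda x: A.T @ (A @ x - b)

L = np.linalg.norm(A, 2) ** 2      # smoothness constant (largest eigenvalue of A^T A)
f_star = f(np.linalg.lstsq(A, b, rcond=None)[0])

x = np.zeros(10)
gaps = []
for _ in range(200):
    x -= grad(x) / L               # gradient descent with fixed step 1/L
    gaps.append(f(x) - f_star)

# Successive optimality gaps shrink by a roughly constant factor < 1,
# i.e., a Q-linear rate, despite the lack of strong convexity.
ratios = [g2 / g1 for g1, g2 in zip(gaps, gaps[1:]) if g1 > 1e-12]
print(max(ratios))
```

The same qualitative behavior would fail for merely convex problems without any error bound (e.g., objectives with sublinear growth near the solution set), which is the gap the unified conditions are meant to characterize.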

Citation

arXiv:1606.00269
