CONVERGENCE RATE OF GRADIENT BASED ADAPTIVE RESTART FOR ACCELERATED GRADIENT SCHEMES

The accelerated gradient algorithm is known to exhibit non-monotonic, periodic convergence behavior in the high-momentum regime. If key function parameters such as the condition number are known, the momentum can be tuned to obtain linear convergence. Unfortunately, these parameters are usually not accessible, so heuristics are used instead to decide when to restart. One of the most intuitive and well-known heuristics restarts whenever the inner product between the momentum and the gradient is positive. In this paper we prove that this adaptive restarting heuristic converges linearly under strong convexity.
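The restart test described above can be sketched in a few lines. The following is a minimal illustration, not the paper's exact algorithm: it runs Nesterov-style accelerated gradient descent with the standard k/(k+3) extrapolation and resets the momentum counter whenever the gradient makes a positive inner product with the momentum direction. The function name, step size, and iteration count are assumptions for the example.

```python
import numpy as np

def agd_with_gradient_restart(grad, x0, step, iters=500):
    """Accelerated gradient descent with gradient-based adaptive restart
    (illustrative sketch; parameter names and defaults are assumptions)."""
    x = y = np.asarray(x0, dtype=float)
    k = 0  # momentum counter, reset to zero on each restart
    for _ in range(iters):
        g = grad(y)
        x_next = y - step * g        # gradient step from the extrapolated point
        momentum = x_next - x        # direction of recent travel
        if g @ momentum > 0:         # restart test: momentum points uphill
            k = 0                    # discard accumulated momentum
        else:
            k += 1
        y = x_next + (k / (k + 3)) * momentum  # Nesterov extrapolation
        x = x_next
    return x

# Example: ill-conditioned quadratic f(x) = 0.5 * x^T A x, minimizer at 0
A = np.diag([1.0, 100.0])
xs = agd_with_gradient_restart(lambda v: A @ v, [1.0, 1.0], step=1.0 / 100.0)
```

On this quadratic the restart heuristic suppresses the oscillations that plain momentum would produce along the ill-conditioned direction, and the iterates contract toward the minimizer at a linear rate.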
