Accelerating Proximal Gradient Descent via Silver Stepsizes

Surprisingly, recent work has shown that gradient descent can be accelerated without using momentum, simply by judiciously choosing stepsizes. An open question raised by several papers is whether this phenomenon of stepsize-based acceleration holds more generally for constrained and/or composite convex optimization via projected and/or proximal versions of gradient descent. We answer this in the affirmative … Read more
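
The sketch below illustrates the idea of stepsize-based acceleration in the proximal setting: plain proximal gradient descent on a lasso objective, with stepsizes taken from a silver-style schedule. The 2-adic closed form $h_t = (1 + \rho^{\nu(t)-1})/L$ with $\rho = 1+\sqrt{2}$ is the standard presentation of the Silver Stepsize Schedule for smooth convex gradient descent; carrying it over unchanged to the proximal step, the lasso objective, the $1/L$ scaling, and the horizon $n = 2^k - 1$ are my illustrative assumptions here, not necessarily the exact schedule of this paper. The helper names (`silver_stepsizes`, `soft_threshold`, `proximal_gd_silver`) are hypothetical.

```python
import numpy as np

SILVER_RATIO = 1 + np.sqrt(2)  # rho = 1 + sqrt(2)

def silver_stepsizes(num_steps):
    """Silver-style schedule (before 1/L scaling): the t-th stepsize is
    1 + rho**(nu(t) - 1), where nu(t) is the 2-adic valuation of t.
    The schedule is naturally defined for num_steps = 2**k - 1; other
    horizons are handled here by simply truncating the formula."""
    steps = []
    for t in range(1, num_steps + 1):
        m, nu = t, 0
        while m % 2 == 0:
            m //= 2
            nu += 1
        steps.append(1 + SILVER_RATIO ** (nu - 1))
    return np.array(steps)

def soft_threshold(x, tau):
    """Prox of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_gd_silver(A, b, lam, num_steps):
    """Proximal gradient descent for 0.5*||Ax - b||^2 + lam*||x||_1,
    with no momentum: only the stepsizes change across iterations."""
    L = np.linalg.norm(A, 2) ** 2        # smoothness constant of the smooth part
    h = silver_stepsizes(num_steps) / L  # assumed 1/L scaling of the schedule
    x = np.zeros(A.shape[1])
    for t in range(num_steps):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - h[t] * grad, h[t] * lam)
    return x

# Toy usage on a random lasso instance (horizon 2**5 - 1 = 31).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
x_hat = proximal_gd_silver(A, b, lam=0.1, num_steps=31)
```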

Acceleration by Random Stepsizes: Hedging, Equalization, and the Arcsine Stepsize Schedule

We show that for separable convex optimization, random stepsizes fully accelerate Gradient Descent. Specifically, using inverse stepsizes drawn i.i.d. from the Arcsine distribution improves the iteration complexity from $O(k)$ to $O(k^{1/2})$, where $k$ is the condition number. No momentum or other algorithmic modifications are required. This result is incomparable to the (deterministic) Silver Stepsize Schedule which … Read more
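
A minimal numerical sketch of the idea: plain gradient descent on a separable quadratic, where the inverse stepsizes are drawn i.i.d. from an Arcsine distribution. The choice of support $[\mu, L]$ (the strong-convexity and smoothness constants), the inverse-CDF sampler, and the helper names (`arcsine_inverse_stepsizes`, `random_stepsize_gd`) are illustrative assumptions on my part, not the paper's exact construction; the point is only that no momentum or other modification is used.

```python
import numpy as np

def arcsine_inverse_stepsizes(num_steps, mu, L, rng):
    """Draw inverse stepsizes i.i.d. from the Arcsine distribution on [mu, L],
    via inverse-CDF sampling: lambda = mu + (L - mu) * sin(pi*U/2)**2."""
    u = rng.uniform(size=num_steps)
    return mu + (L - mu) * np.sin(np.pi * u / 2.0) ** 2

def random_stepsize_gd(diag_hessian, x0, num_steps, rng):
    """Plain gradient descent on the separable quadratic
    f(x) = 0.5 * sum_i d_i * x_i**2, using random stepsizes whose
    inverses are i.i.d. Arcsine on [mu, L]. No momentum."""
    d = np.asarray(diag_hessian, dtype=float)
    mu, L = d.min(), d.max()                     # strong convexity / smoothness
    inv_steps = arcsine_inverse_stepsizes(num_steps, mu, L, rng)
    x = np.asarray(x0, dtype=float).copy()
    for lam in inv_steps:
        x -= (1.0 / lam) * (d * x)               # gradient of f is d * x
    return x

# Toy usage: separable quadratic with condition number k = 100.
rng = np.random.default_rng(1)
d = np.logspace(0, 2, 50)                        # eigenvalues in [1, 100]
x = random_stepsize_gd(d, x0=np.ones(50), num_steps=200, rng=rng)
print(np.linalg.norm(x))  # typically far below ||x0||, though any single run is random
```

Individual iterates can transiently grow when a large stepsize hits a large eigenvalue; the acceleration is a statement about the product of contraction factors over the whole random schedule, so only the final iterate should be compared against the starting point.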