Acceleration by Random Stepsizes: Hedging, Equalization, and the Arcsine Stepsize Schedule

We show that for separable convex optimization, random stepsizes fully
accelerate Gradient Descent. Specifically, using inverse stepsizes i.i.d. from
the Arcsine distribution improves the iteration complexity from $O(\kappa)$ to
$O(\kappa^{1/2})$, where $\kappa$ is the condition number. No momentum or other
algorithmic modifications are required. This result is incomparable to the
(deterministic) Silver Stepsize Schedule which does not require separability
but only achieves partial acceleration $O(\kappa^{\log_{1+\sqrt{2}} 2}) \approx
O(\kappa^{0.78})$. Our starting point is a conceptual connection to potential
theory: the variational characterization for the distribution of stepsizes with
fastest convergence rate mirrors the variational characterization for the
distribution of charged particles with minimal logarithmic potential energy.
The Arcsine distribution solves both variational characterizations due to a
remarkable “equalization property” which in the physical context amounts to a
constant potential over space, and in the optimization context amounts to an
identical convergence rate over all quadratic functions. A key technical
insight is that martingale arguments extend this phenomenon to all separable
convex functions. We interpret this equalization as an extreme form of hedging:
by using this random distribution over stepsizes, Gradient Descent converges at
exactly the same rate for all functions in the function class.
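As a rough illustration (not the paper's implementation), the sketch below runs plain Gradient Descent with inverse stepsizes drawn i.i.d. from the Arcsine distribution, here assumed to be supported on $[\mu, L]$ (density proportional to $1/\sqrt{(s-\mu)(L-s)}$), on a separable convex quadratic. The constants $\mu$, $L$, the dimension, and the iteration count are illustrative assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative separable convex quadratic f(x) = (1/2) * sum_i a_i * x_i^2,
# with per-coordinate curvatures a_i in [mu, L] (mu-strongly convex, L-smooth).
# The values of mu, L, the dimension, and the iteration count are assumptions
# made for this sketch, not parameters taken from the paper.
mu, L = 1.0, 100.0
a = rng.uniform(mu, L, size=50)

def grad(x):
    """Gradient of the separable quadratic above."""
    return a * x

def sample_inverse_stepsize(mu, L, rng):
    """Draw an inverse stepsize from the Arcsine distribution on [mu, L].

    If U ~ Uniform(0, 1), then mu + (L - mu) * sin(pi * U / 2)**2 follows the
    Arcsine law on [mu, L], with density 1 / (pi * sqrt((s - mu) * (L - s))).
    """
    u = rng.uniform()
    return mu + (L - mu) * np.sin(np.pi * u / 2.0) ** 2

# Plain Gradient Descent with random stepsizes, no momentum:
#   x <- x - (1 / s_t) * grad f(x),   with s_t i.i.d. Arcsine on [mu, L].
# Individual iterates can transiently grow (large stepsizes overshoot steep
# coordinates); the accelerated contraction is a property of the random
# product of steps, so any single run fluctuates around the predicted rate.
x = np.ones_like(a)
for _ in range(1000):
    s = sample_inverse_stepsize(mu, L, rng)
    x = x - (1.0 / s) * grad(x)

print("final objective value:", 0.5 * float(np.dot(a, x**2)))
```

For comparison, deterministic Gradient Descent with the fixed stepsize $1/L$ on the same quadratic contracts its flattest coordinate by only about $1 - \mu/L$ per step, which is the unaccelerated $O(\kappa)$ regime the abstract refers to.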
