Hedge Algorithm and Subgradient Methods

We show that the Hedge Algorithm, a method widely used in machine learning, can be interpreted as a particular subgradient algorithm for minimizing a well-chosen convex function, namely as a mirror descent scheme. Using this reformulation, we establish three modifications and extensions of the Hedge Algorithm that are at least as good as, and in some cases better than, the standard method with respect to worst-case guarantees. Numerical experiments show that the modified and extended methods we suggest in this paper consistently outperform the standard Hedge Algorithm.
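For reference, the standard Hedge Algorithm maintains a probability distribution over N experts and, after observing a loss vector, reweights each expert multiplicatively; this multiplicative update coincides with a mirror descent step using the negative-entropy mirror map over the probability simplex. The Python sketch below shows only the standard update, not the paper's three modifications; the learning rate eta and the assumption that losses lie in [0, 1] are illustrative choices, not taken from the report.

import numpy as np

def hedge_update(weights, losses, eta):
    """One step of the standard Hedge (multiplicative weights) update.

    Equivalent to a mirror descent step with the negative-entropy
    mirror map over the probability simplex.
    """
    w = weights * np.exp(-eta * losses)   # multiplicative reweighting
    return w / w.sum()                    # renormalize onto the simplex

# Usage sketch: N experts over T rounds, losses assumed in [0, 1].
N, T = 10, 100
eta = np.sqrt(2.0 * np.log(N) / T)       # a standard learning-rate choice
weights = np.full(N, 1.0 / N)            # uniform initial distribution
rng = np.random.default_rng(0)
for _ in range(T):
    losses = rng.uniform(size=N)         # placeholder loss vector
    weights = hedge_update(weights, losses, eta)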

Citation

IFOR Internal report, December 2009, ETH Zurich, Raemistrasse 101, CH-8092 Zurich, Switzerland.
