This paper presents an algorithm for approximately minimizing a convex function over simple, not necessarily bounded, convex domains, assuming only that function values and subgradients are available. No global information about the objective function is required beyond a strong convexity parameter (which may be set to zero when only convexity is known). The worst-case number of iterations needed to reach a given accuracy is independent of the dimension (which may be infinite) and, up to a constant factor, best possible under a variety of smoothness assumptions on the objective function.
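To illustrate the oracle model assumed here (the method sees only function values and subgradients, plus projections onto a simple domain), the following is a minimal sketch of a generic projected subgradient method, not the paper's OSGA algorithm. The objective f(x) = ||x||_1 and the box domain [-1, 2]^n are hypothetical choices for the example.

```python
import numpy as np

# Illustrative projected subgradient method (NOT the OSGA algorithm of
# the paper): minimize f(x) = ||x||_1 over the simple, convex box
# [-1, 2]^n, using only function values and subgradients.

def subgradient_l1(x):
    # sign(x) is a valid subgradient of the 1-norm (0 is chosen at x_i = 0).
    return np.sign(x)

def project_box(x, lo=-1.0, hi=2.0):
    # Projection onto the box: simple domains make this step cheap.
    return np.clip(x, lo, hi)

def projected_subgradient(x0, steps=500):
    x = x0.copy()
    best_x, best_f = x.copy(), np.abs(x).sum()
    for k in range(1, steps + 1):
        g = subgradient_l1(x)
        # Diminishing step size 1/sqrt(k); track the best iterate, since
        # plain subgradient steps need not decrease f monotonically.
        x = project_box(x - g / np.sqrt(k))
        f = np.abs(x).sum()
        if f < best_f:
            best_x, best_f = x.copy(), f
    return best_x, best_f

x_star, f_star = projected_subgradient(np.array([2.0, -1.0, 1.5]))
print(x_star, f_star)
```

This basic scheme has the well-known O(1/sqrt(k)) rate for nonsmooth convex problems; the point of the paper is a method that adapts to smoothness and strong convexity to achieve the optimal rate in each regime, without knowing global problem constants.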

## Article

OSGA: A fast subgradient algorithm with optimal complexity