Stochastic first order methods in smooth convex optimization.

In this paper, we are interested in the development of efficient first-order methods for convex optimization problems in the simultaneous presence of smoothness of the objective function and stochasticity in the first-order information. First, we consider the Stochastic Primal Gradient method, which is nothing else but the Mirror Descent SA method applied to a smooth function, and we develop new practical and efficient stepsize policies. Based on the machinery of estimate sequences, we also develop two new methods, a Stochastic Dual Gradient Method and an accelerated Stochastic Fast Gradient Method. Convergence rates on average, probabilities of large deviations and accuracy certificates are studied. All of these methods are designed to decrease the effect of the stochastic noise at an unimprovable rate and to be easily implementable in practice (the practical efficiency of our methods is confirmed by numerical experiments). Furthermore, the biased case, in which the oracle is not only stochastic but also affected by a bias, is considered for the first time in the literature.
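To give a concrete feel for the kind of scheme the abstract refers to, the sketch below shows a generic stochastic gradient method with iterate averaging for a smooth objective observed through a noisy first-order oracle. It is purely illustrative: the name grad_oracle, the decreasing stepsize 1/(L + sigma*sqrt(k)) and the averaging rule are assumptions chosen for exposition, not the stepsize policies or estimate-sequence constructions developed in the paper.

    import numpy as np

    def stochastic_gradient(x0, grad_oracle, L, sigma, n_iters):
        """Illustrative sketch of a stochastic gradient scheme with averaging.

        x0          -- starting point (array-like)
        grad_oracle -- function returning an unbiased noisy gradient at x
        L           -- Lipschitz constant of the gradient (smoothness)
        sigma       -- standard deviation of the gradient noise
        n_iters     -- number of iterations
        """
        x = np.asarray(x0, dtype=float).copy()
        x_avg = np.zeros_like(x)
        for k in range(1, n_iters + 1):
            g = grad_oracle(x)                     # noisy first-order information
            step = 1.0 / (L + sigma * np.sqrt(k))  # decreasing stepsize to damp the noise
            x = x - step * g                       # plain (Euclidean) gradient step
            x_avg += (x - x_avg) / k               # running average of the iterates
        return x_avg

The averaged iterate is returned because, for SA-type methods with noisy gradients, averaging is a standard way to control the variance of the output; the methods in the paper pursue the same goal with their own, more refined, constructions.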

Citation

CORE Discussion Paper 2011/70, Université catholique de Louvain, Belgium
