The goal of this paper is to study the effect of inexact first-order information on first-order methods designed for smooth strongly convex optimization problems. We introduce the notion of a (δ,L,μ)-oracle, which can be seen as an extension of the previously introduced inexact (δ,L)-oracle that takes strong convexity into account. We consider several examples of (δ,L,μ)-oracles: a strongly convex function with first-order information computed at a shifted point, a strongly convex function with an approximate gradient, and a strongly convex max-function whose inner subproblems are solved only approximately. The core of the paper is devoted to analyzing the behavior of three first-order methods, namely the primal, the dual, and the fast gradient method, when they are used with a (δ,L,μ)-oracle. As in the smooth convex case, we find that the simple gradient methods are robust but relatively slow, whereas the fast gradient method is faster but more sensitive to oracle errors. However, strong convexity leads to much faster convergence rates (linear instead of sublinear) for every method, and to a reduced sensitivity with respect to oracle errors. We also prove that the notion of a (δ,L,μ)-oracle can be used to model exact first-order information for functions with a weaker level of smoothness and a different level of convexity. This observation allows us to apply methods originally designed for smooth strongly convex functions to weakly smooth uniformly convex functions, and to derive the corresponding performance guarantees.
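For concreteness, the defining inequality of a (δ,L,μ)-oracle can be restated as follows (this is a restatement of the standard definition from this line of work, included here for convenience; the notation is assumed rather than quoted from the abstract): at a query point y, the oracle returns a pair (f_δ(y), g_δ(y)) such that, for all x,

    \frac{\mu}{2}\,\|x-y\|^2 \;\le\; f(x) - f_\delta(y) - \langle g_\delta(y),\, x-y\rangle \;\le\; \frac{L}{2}\,\|x-y\|^2 + \delta .

Setting μ = 0 recovers the earlier (δ,L)-oracle for smooth convex functions, while δ = 0 with exact values and gradients recovers the classical first-order characterization of an L-smooth, μ-strongly convex function.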
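To illustrate the robustness claim for the simple gradient methods, the following minimal Python sketch (our own illustration, not code from the paper) runs the primal gradient method on a strongly convex quadratic through an oracle that returns the gradient only up to a bounded error, i.e., the "approximate gradient" example mentioned above. The quadratic, the accuracy eps, and the iteration count are assumptions chosen for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative strongly convex quadratic f(x) = 0.5 * x^T A x,
    # with mu = 1 (smallest eigenvalue) and L = 10 (largest eigenvalue).
    A = np.diag([1.0, 10.0])
    L = 10.0

    def inexact_oracle(x, eps=1e-3):
        # Exact function value, gradient known only up to accuracy eps:
        # an instance of the approximate-gradient oracle example; the
        # noise model and eps are illustrative assumptions.
        u = rng.standard_normal(x.shape)
        g = A @ x + eps * u / np.linalg.norm(u)   # gradient error <= eps
        return 0.5 * x @ A @ x, g

    # Primal gradient method: x_{k+1} = x_k - (1/L) g_delta(x_k).
    x = np.array([5.0, 5.0])
    for _ in range(300):
        _, g = inexact_oracle(x)
        x -= g / L

    # The objective decreases linearly until it stalls at a noise
    # floor determined by the oracle accuracy.
    print("final objective:", 0.5 * x @ A @ x)

Consistent with the discussion above, reducing eps lowers the achievable accuracy floor, while the linear rate of the initial phase is governed by the ratio μ/L.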
First-order methods with inexact oracle: the strongly convex case
CORE Discussion Paper 2013/16