Exact worst-case convergence rates of the proximal gradient method for composite convex minimization

We study the worst-case convergence rates of the proximal gradient method for minimizing the sum of a smooth strongly convex function and a non-smooth convex function whose proximal operator is available. We establish the exact worst-case convergence rates of the proximal gradient method in this setting for any step size and for different standard performance …
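
As a concrete illustration (our sketch, not taken from the paper), the method alternates a gradient step on the smooth part with the proximal operator of the non-smooth part. Below, the smooth term is a least-squares fit and the non-smooth term is an l1 penalty, whose proximal operator is soft-thresholding; the data and the step size gamma = 1/L are placeholder choices.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, gamma, n_iter=200):
    # Minimize 0.5*||A x - b||^2 + lam*||x||_1: gradient step on the
    # smooth part, then the prox of the non-smooth part.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)          # gradient of the smooth term
        x = soft_threshold(x - gamma * grad, gamma * lam)
    return x

# Toy usage: a random overdetermined least-squares problem, so the
# smooth term is strongly convex (A has full column rank almost surely).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
x_hat = proximal_gradient(A, b, lam=0.1, gamma=1.0 / L)
```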

Worst-case convergence analysis of gradient and Newton methods through semidefinite programming performance estimation

We provide new tools for worst-case performance analysis of the gradient (or steepest descent) method of Cauchy for smooth strongly convex functions, and Newton’s method for self-concordant functions. The analysis uses semidefinite programming performance estimation, as pioneered by Drori and Teboulle [Mathematical Programming, 145(1-2):451–482, 2014], and extends recent performance estimation results for the method of …
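
For context, performance estimation casts worst-case analysis itself as an optimization problem: schematically (our notation, not the paper's), for a method producing x_N after N steps one solves

\[
\max_{f \in \mathcal{F},\; x_0}\; f(x_N) - f(x^\star)
\quad \text{subject to} \quad \|x_0 - x^\star\| \le R,\ \ x_1,\dots,x_N \text{ generated by the method},
\]

where \(\mathcal{F}\) is the function class under study; this infinite-dimensional problem is then relaxed to, and in many cases solved exactly as, a semidefinite program.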

On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions

We consider the gradient (or steepest) descent method with exact line search applied to a strongly convex function with Lipschitz continuous gradient. We establish the exact worst-case rate of convergence of this scheme, and show that this worst-case behavior is exhibited by a certain convex quadratic function. We also extend the result to a noisy …
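
The quadratic worst case mentioned in the abstract can be checked numerically; the following small script (ours, with placeholder values mu = 1 and L = 10) runs exact line-search steepest descent on the classical two-dimensional quadratic, on which the function value contracts by exactly ((L - mu)/(L + mu))^2 at every step:

```python
import numpy as np

# Quadratic f(x) = 0.5*(mu*x1^2 + L*x2^2): mu-strongly convex, L-smooth.
mu, L = 1.0, 10.0
H = np.diag([mu, L])
f = lambda x: 0.5 * x @ H @ x

x = np.array([1.0 / mu, 1.0 / L])      # classical worst-case starting point
rate = ((L - mu) / (L + mu)) ** 2      # per-step contraction of f(x) - f*
for k in range(5):
    g = H @ x                          # gradient
    gamma = (g @ g) / (g @ H @ g)      # exact line search on a quadratic
    x_new = x - gamma * g
    print(k, f(x_new) / f(x), rate)    # the two printed values coincide
    x = x_new
```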

Exact Worst-case Performance of First-order Methods for Composite Convex Optimization

We provide a framework for computing the exact worst-case performance of any algorithm belonging to a broad class of oracle-based first-order methods for composite convex optimization, including those performing explicit, projected, proximal, conditional and inexact (sub)gradient steps. We simultaneously obtain tight worst-case guarantees and explicit instances of optimization problems on which the algorithm reaches this …
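
As an informal illustration of the class of steps covered (our sketch; grad_f, project and prox_h stand for whatever oracles the problem provides):

```python
def explicit_step(x, grad_f, gamma):
    # Plain (sub)gradient step.
    return x - gamma * grad_f(x)

def projected_step(x, grad_f, gamma, project):
    # Gradient step followed by projection onto the feasible set.
    return project(x - gamma * grad_f(x))

def proximal_step(x, grad_f, gamma, prox_h):
    # Forward-backward step: gradient on the smooth part f,
    # proximal operator on the non-smooth part h.
    return prox_h(x - gamma * grad_f(x), gamma)
```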

Smooth Strongly Convex Interpolation and Exact Worst-case Performance of First-order Methods

We show that the exact worst-case performance of fixed-step first-order methods for unconstrained optimization of smooth (possibly strongly) convex functions can be obtained by solving convex programs. Finding the worst-case performance of a black-box first-order method is formulated as an optimization problem over a set of smooth (strongly) convex functions and initial conditions. We develop …
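
As a minimal hand-coded instance of this idea (our sketch, not the authors' code), the following sets up the worst case of a single gradient step x1 = x0 - gamma*g0 over L-smooth convex functions (mu = 0) as a small semidefinite program, using smooth convex interpolation conditions f_i >= f_j + <g_j, x_i - x_j> + ||g_i - g_j||^2/(2L) over the points {x*, x0, x1}:

```python
import cvxpy as cp

L, gamma = 1.0, 1.0          # smoothness constant and step size

# Gram matrix of (x0 - xs, g0, g1), and values f0, f1 relative to f(xs) = 0.
G = cp.Variable((3, 3), PSD=True)
f0, f1 = cp.Variable(), cp.Variable()

def interp(fi, fj, gj_dot_xi_minus_xj, gi_minus_gj_sq):
    # f_i >= f_j + <g_j, x_i - x_j> + ||g_i - g_j||^2 / (2L)
    return fi >= fj + gj_dot_xi_minus_xj + gi_minus_gj_sq / (2 * L)

constraints = [
    G[0, 0] <= 1,                                           # ||x0 - xs||^2 <= 1
    interp(0,  f0, -G[1, 0], G[1, 1]),                      # pair (xs, x0)
    interp(f0, 0,  0,        G[1, 1]),                      # pair (x0, xs)
    interp(0,  f1, -(G[2, 0] - gamma * G[2, 1]), G[2, 2]),  # pair (xs, x1)
    interp(f1, 0,  0,        G[2, 2]),                      # pair (x1, xs)
    interp(f0, f1, gamma * G[2, 1],
           G[1, 1] - 2 * G[1, 2] + G[2, 2]),                # pair (x0, x1)
    interp(f1, f0, -gamma * G[1, 1],
           G[1, 1] - 2 * G[1, 2] + G[2, 2]),                # pair (x1, x0)
]

prob = cp.Problem(cp.Maximize(f1), constraints)
prob.solve()
print(prob.value)  # for gamma = 1/L this should come out near L/6, i.e. the
                   # known tight one-step bound L*R^2/(4N+2) with N = 1, R = 1
```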

First-order methods with inexact oracle: the strongly convex case

The goal of this paper is to study the effect of inexact first-order information on the first-order methods designed for smooth strongly convex optimization problems. We introduce the notion of (delta,L,mu)-oracle, which can be seen as an extension of the previously introduced inexact (delta,L)-oracle that takes strong convexity into account. We consider different examples of (delta,L,mu)-oracle: …
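
Schematically (our notation, following the abstract), a (delta,L,mu)-oracle at a point y returns approximate values f_delta(y) and g_delta(y) satisfying, for all x,

\[
\frac{\mu}{2}\,\|x-y\|^2 \;\le\; f(x) - f_\delta(y) - \langle g_\delta(y),\, x-y\rangle \;\le\; \frac{L}{2}\,\|x-y\|^2 + \delta,
\]

so that mu = 0 recovers the earlier (delta,L)-oracle and delta = 0 recovers exact first-order information for smooth strongly convex functions.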

Intermediate gradient methods for smooth convex problems with inexact oracle

Between the robust but slow (primal or dual) gradient methods and the fast but error-sensitive fast gradient methods, our goal in this paper is to develop first-order methods for smooth convex problems with intermediate speed and intermediate sensitivity to errors. We develop a general family of first-order methods, the Intermediate Gradient Method (IGM), …
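
Roughly, and in our notation rather than the paper's, the trade-off being interpolated is

\[
f(x_N) - f^\star \;\lesssim\; \frac{L R^2}{N} + \delta \quad \text{(gradient method)},
\qquad
f(x_N) - f^\star \;\lesssim\; \frac{L R^2}{N^2} + N\,\delta \quad \text{(fast gradient method)},
\]

so faster convergence in N comes at the price of accumulating the oracle error delta, and intermediate methods aim at intermediate exponents on both terms.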

A Continuous Characterization of the Maximum-Edge Biclique Problem

The problem of finding large complete subgraphs in bipartite graphs (that is, bicliques) is a well-known combinatorial optimization problem referred to as the maximum-edge biclique problem (MBP), and has many applications, e.g., in web community discovery, biological data analysis and text mining. In this paper, we present a new continuous characterization for MBP. Given a …

A Double Smoothing Technique for Constrained Convex Optimization Problems and Applications to Optimal Control

In this paper, we propose an efficient approach for solving a class of convex optimization problems in Hilbert spaces. Our feasible region is a (possibly infinite-dimensional) simple convex set, i.e. we assume that projections onto this set are easy to compute. The problem we consider is the minimization of a convex function over this …
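
To illustrate what "simple" means here (our examples, not the paper's), projections onto standard sets such as a box or a Euclidean ball are available in closed form:

```python
import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto {lo <= x <= hi}: clip componentwise.
    return np.clip(x, lo, hi)

def project_ball(x, radius=1.0):
    # Euclidean projection onto {||x|| <= radius}: rescale if outside.
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x
```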

First-order Methods of Smooth Convex Optimization with Inexact Oracle

We introduce the notion of inexact first-order oracle and analyze the behaviour of several first-order methods of smooth convex optimization used with such an oracle. This notion of inexact oracle naturally appears in the context of smoothing techniques, Moreau-Yosida regularization, Augmented Lagrangians and many other situations. We derive complexity estimates for primal, dual and fast …
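
Schematically (our notation), a (delta,L)-oracle at a point y returns approximate values f_delta(y) and g_delta(y) such that, for all x,

\[
0 \;\le\; f(x) - f_\delta(y) - \langle g_\delta(y),\, x-y\rangle \;\le\; \frac{L}{2}\,\|x-y\|^2 + \delta,
\]

with delta = 0 recovering the usual first-order oracle for L-smooth convex functions.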