Object-Parallel Infrastructure for Implementing First-Order Methods, with an Example Application to LASSO

We describe the design of a C++ vector-manipulation substrate that allows first-order optimization algorithms to be expressed in a concise and readable manner, yet still achieve high performance in parallel computing environments. We use standard object-oriented techniques of encapsulation and operator overloading, combined with a novel “symbolic temporaries” delayed-evaluation system that greatly reduces the overhead …
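
To make the “symbolic temporaries” idea concrete, here is a minimal sketch of delayed evaluation via operator overloading (expression templates). All class and member names are illustrative assumptions, not the paper's actual API; the point is that arithmetic on vectors builds a lightweight expression object, and the single fused loop in the assignment operator is where intermediate-vector allocation is avoided and parallelism can be inserted.

#include <cstddef>
#include <vector>

// Expression node representing l + r without evaluating it.
template <class L, class R>
struct Sum {
    const L& l; const R& r;
    double operator[](std::size_t i) const { return l[i] + r[i]; }
    std::size_t size() const { return l.size(); }
};

// Expression node representing a * e.
template <class E>
struct Scaled {
    double a; const E& e;
    double operator[](std::size_t i) const { return a * e[i]; }
    std::size_t size() const { return e.size(); }
};

struct Vec {
    std::vector<double> data;
    explicit Vec(std::size_t n, double v = 0.0) : data(n, v) {}
    double operator[](std::size_t i) const { return data[i]; }
    double& operator[](std::size_t i) { return data[i]; }
    std::size_t size() const { return data.size(); }

    // Assignment from any expression: one fused loop, no temporaries.
    // This loop is the natural place for OpenMP/thread parallelism.
    template <class E>
    Vec& operator=(const E& e) {
        for (std::size_t i = 0; i < data.size(); ++i) data[i] = e[i];
        return *this;
    }
};

template <class L, class R>
Sum<L, R> operator+(const L& l, const R& r) { return {l, r}; }

template <class E>
Scaled<E> operator*(double a, const E& e) { return {a, e}; }

int main() {
    Vec x(1000, 1.0), g(1000, 0.5), xnew(1000, 0.0);
    double t = 0.1;
    xnew = x + (-t) * g;  // builds Sum<Vec, Scaled<Vec>>; evaluated in one pass
}

A gradient step written this way reads close to the mathematics while compiling down to a single loop over the data.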

Fast Bundle-Level Type Methods for unconstrained and ball-constrained convex optimization

It has been shown in [Lan13-1] that the accelerated prox-level (APL) method and its variant, the uniform smoothing level (USL) method, have optimal iteration complexity for solving black-box and structured convex programming problems without requiring the input of any smoothness information. However, these algorithms require the assumption of boundedness of the feasible set and …
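
For context, the optimal black-box complexities that level-type methods such as APL attain are the classical ones below, stated here as background rather than taken from the paper; note that both bounds involve the diameter D of the feasible set, which is precisely the boundedness assumption the paper seeks to remove.

% Classical optimal oracle complexities (background, not from the paper):
% M: Lipschitz constant of f; L: Lipschitz constant of \nabla f;
% D: diameter of the feasible set.
\[
  N_{\mathrm{nonsmooth}}(\epsilon) = O\!\left(\frac{M^2 D^2}{\epsilon^2}\right),
  \qquad
  N_{\mathrm{smooth}}(\epsilon) = O\!\left(\sqrt{\frac{L D^2}{\epsilon}}\right).
\]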

On the Information-Adaptive Variants of the ADMM: an Iteration Complexity Perspective

Designing algorithms for an optimization model often amounts to striking a balance between the amount of information requested from the model on the one hand, and the computational speed one can expect on the other. Naturally, the more information is available, the faster one can expect the algorithm to converge. The popular algorithm of …
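
As background for this trade-off, the classical ADMM iteration for min_{x,z} f(x) + g(z) subject to Ax + Bz = c requests only block-wise first-order/proximal information per iteration; a standard statement (generic ADMM, not the paper's information-adaptive variants) is:

% Standard ADMM iteration with penalty \beta > 0 (generic form,
% not the paper's variants):
\begin{align*}
  x^{k+1}       &= \arg\min_{x}\; f(x) + \tfrac{\beta}{2}\,\big\|Ax + Bz^{k} - c + \lambda^{k}/\beta\big\|^2, \\
  z^{k+1}       &= \arg\min_{z}\; g(z) + \tfrac{\beta}{2}\,\big\|Ax^{k+1} + Bz - c + \lambda^{k}/\beta\big\|^2, \\
  \lambda^{k+1} &= \lambda^{k} + \beta\,\big(Ax^{k+1} + Bz^{k+1} - c\big).
\end{align*}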

Efficient First-Order Methods for Linear Programming and Semidefinite Programming

We present a simple transformation of any linear program or semidefinite program into an equivalent convex optimization problem whose only constraints are linear equations. The objective function is defined on the whole space, making virtually all subgradient methods immediately applicable. We observe, moreover, that the objective function is naturally “smoothed,” thereby allowing most first-order …
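
The restriction to linear equality constraints matters because projection onto an affine set is a single linear map, so projected subgradient steps stay cheap. A generic sketch of such a step (the paper's specific transformation is not reproduced here):

% Projected subgradient step for  min f(x)  s.t.  Ax = b,
% with f finite everywhere (generic sketch):
\[
  x^{k+1} = x^{k} - t_k\, P\, g^{k},
  \qquad
  P = I - A^{\top}\big(A A^{\top}\big)^{-1} A,
  \qquad
  g^{k} \in \partial f(x^{k}),
\]
% where P is the orthogonal projector onto the null space of A;
% if x^0 is feasible, every iterate remains feasible.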

On the ergodic convergence rates of a first-order primal-dual algorithm

We revisit the proofs of convergence for a first-order primal-dual algorithm for convex optimization which we studied a few years ago. In particular, we prove rates of convergence for a more general version, with simpler proofs and more complete results.
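
For reference, the first-order primal-dual iteration in question (for problems of the form min_x G(x) + F(Kx), as in the authors' earlier work) has the familiar form below; the ergodic rates concern the averaged iterates.

% Primal-dual iteration for  min_x G(x) + F(Kx), with step sizes
% satisfying \tau\sigma\|K\|^2 \le 1 and \theta \in [0,1]
% (recalled here for context):
\begin{align*}
  y^{n+1}       &= \operatorname{prox}_{\sigma F^{*}}\!\big(y^{n} + \sigma K \bar{x}^{n}\big), \\
  x^{n+1}       &= \operatorname{prox}_{\tau G}\!\big(x^{n} - \tau K^{\top} y^{n+1}\big), \\
  \bar{x}^{n+1} &= x^{n+1} + \theta\,\big(x^{n+1} - x^{n}\big).
\end{align*}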

Intermediate gradient methods for smooth convex problems with inexact oracle

Between the robust but slow (primal or dual) gradient methods and the fast but error-sensitive fast gradient methods, our goal in this paper is to develop first-order methods for smooth convex problems with intermediate speed and intermediate sensitivity to errors. We develop a general family of first-order methods, the Intermediate Gradient Method (IGM), …
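
The trade-off being interpolated is the one known from the inexact-oracle framework: with a (δ,L)-oracle, the slow method does not accumulate oracle error while the fast method accumulates it linearly in the iteration count (rates recalled below for context; R = ‖x⁰ − x*‖).

% Known rates under a (\delta, L)-oracle (background, not from this paper):
\[
  \text{gradient method:}\quad
  f(x^{k}) - f^{*} \le O\!\left(\frac{L R^2}{k}\right) + \delta,
  \qquad
  \text{fast gradient method:}\quad
  f(x^{k}) - f^{*} \le O\!\left(\frac{L R^2}{k^2}\right) + O(k\,\delta).
\]
% An intermediate method trades decay rate in k against the growth
% of the \delta-term.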

First-order methods with inexact oracle: the strongly convex case

The goal of this paper is to study the effect of inexact first-order information on first-order methods designed for smooth strongly convex optimization problems. We introduce the notion of a (δ,L,μ)-oracle, which can be seen as an extension of the inexact (δ,L)-oracle previously introduced, taking strong convexity into account. We consider different examples of the (δ,L,μ)-oracle: …
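
The published (δ,L)-oracle returns, at a query point x, a pair (f_δ(x), g_δ(x)) sandwiching f between a linear lower model and a quadratic upper model. Its strongly convex extension plausibly tightens the lower model with a quadratic, as sketched below from that definition (an assumption about the exact form, not a quotation from the paper):

% (\delta, L, \mu)-oracle (sketch): for all y,
\[
  \frac{\mu}{2}\,\|y - x\|^{2}
  \;\le\;
  f(y) - f_{\delta}(x) - \big\langle g_{\delta}(x),\, y - x \big\rangle
  \;\le\;
  \frac{L}{2}\,\|y - x\|^{2} + \delta.
\]
% Setting \mu = 0 recovers the original (\delta, L)-oracle.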

Mini-batch Stochastic Approximation Methods for Nonconvex Stochastic Composite Optimization

This paper considers a class of constrained stochastic composite optimization problems whose objective function is given by the sum of a differentiable (possibly nonconvex) component and a certain non-differentiable (but convex) component. In order to solve these problems, we propose a randomized stochastic projected gradient (RSPG) algorithm, in which proper mini-batches of samples are …
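
The shape of a mini-batch stochastic projected-gradient step for min_x { f(x) + h(x) : x ∈ X }, with f smooth (possibly nonconvex) and h convex, is sketched below; this is a generic form under those assumptions, not necessarily the paper's exact RSPG update.

% Generic mini-batch projected-gradient step (sketch):
\[
  G^{k} = \frac{1}{m_k} \sum_{i=1}^{m_k} \nabla F\big(x^{k}, \xi_{k,i}\big),
  \qquad
  x^{k+1} = \arg\min_{x \in X}\;
    \big\langle G^{k}, x \big\rangle
    + \frac{1}{2\gamma_k}\,\|x - x^{k}\|^{2} + h(x),
\]
% where the mini-batch size m_k controls the variance of the gradient
% estimator; "randomized" refers to returning an iterate chosen at
% random from x^1, ..., x^N rather than the last one.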

On Lower Complexity Bounds for Large-Scale Smooth Convex Optimization

In this note we present tight lower bounds on the information-based complexity of large-scale smooth convex minimization problems. We demonstrate, in particular, that the k-step Conditional Gradient (a.k.a. Frank-Wolfe) algorithm as applied to minimizing smooth convex functions over the n-dimensional box with n ≥ k is optimal, up to an O(ln n)-factor, in terms of …
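
The box is a natural domain here because the linear-minimization oracle of Conditional Gradient decomposes coordinate-wise over it. A minimal sketch of one generic step (not the paper's lower-bound construction):

#include <cstddef>
#include <vector>

// One Conditional Gradient (Frank-Wolfe) step over the box [lo, hi]^n.
// The linear-minimization oracle picks, per coordinate, the box endpoint
// minimizing the inner product with the gradient.
void frank_wolfe_step(std::vector<double>& x,
                      const std::vector<double>& grad,
                      double lo, double hi, double gamma) {
    for (std::size_t i = 0; i < x.size(); ++i) {
        double yi = (grad[i] > 0.0) ? lo : hi;     // argmin of <grad, y> over the box
        x[i] = (1.0 - gamma) * x[i] + gamma * yi;  // convex combination keeps feasibility
    }
}

With the classical step size gamma_k = 2/(k+2), each iteration costs only one gradient evaluation plus O(n) work, which is what makes the method attractive at large scale.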

An Augmented Lagrangian Method for Conic Convex Programming

We propose a new first-order augmented Lagrangian algorithm ALCC for solving convex conic programs of the form min{ρ(x) + γ(x) : Ax − b ∈ K, x ∈ χ}, where ρ and γ are closed convex functions, γ has a Lipschitz continuous gradient, A is an m×n real matrix, K is a closed convex cone, and χ is a “simple” …
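
One standard way to write an augmented Lagrangian for the conic constraint Ax − b ∈ K introduces a slack s; this generic form (not necessarily ALCC's exact subproblem) shows why a “simple” χ and a projectable cone K keep the subproblems tractable:

% Generic augmented Lagrangian with slack s and penalty \mu > 0:
\[
  \mathcal{L}_{\mu}(x, s;\, y)
  = \rho(x) + \gamma(x)
  + \big\langle y,\, Ax - b - s \big\rangle
  + \frac{\mu}{2}\,\|Ax - b - s\|^{2},
  \qquad x \in \chi,\; s \in K,
\]
% where minimizing over s for fixed x reduces to a projection onto K:
% s = \Pi_K(Ax - b + y/\mu).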