Accelerated gradient sliding for structured convex optimization

Our main goal in this paper is to show that one can skip gradient computations for gradient descent type methods applied to certain structured convex programming (CP) problems. To this end, we first present an accelerated gradient sliding (AGS) method for minimizing the summation of two smooth convex functions with different Lipschitz constants. We show …
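For intuition, a minimal sketch of the sliding idea under simplifying assumptions: `grad_f` and `grad_h` are hypothetical gradient oracles, and the plain inner gradient steps below stand in for the actual (accelerated) AGS recursion.

```python
def gradient_sliding_sketch(grad_f, grad_h, x0, L_f, L_h, outer_iters, inner_iters):
    """Schematic 'sliding': grad_h (the component whose evaluations we want
    to skip) is computed once per outer iteration and then frozen, while
    grad_f is refreshed at every inner step. Illustrative only; AGS uses an
    accelerated recursion with carefully chosen stepsizes, not this loop.
    x0 is assumed to be a NumPy array."""
    x = x0.copy()
    step = 1.0 / (L_f + L_h)           # conservative stepsize for the sum
    for _ in range(outer_iters):
        gh = grad_h(x)                 # one evaluation of the skipped gradient
        for _ in range(inner_iters):   # inner loop sees h only through gh
            x = x - step * (grad_f(x) + gh)
    return x
```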

An optimal randomized incremental gradient method

In this paper, we consider a class of finite-sum convex optimization problems whose objective function is given by the summation of $m$ ($\ge 1$) smooth components together with some other relatively simple terms. We first introduce a deterministic primal-dual gradient (PDG) method that can achieve the optimal black-box iteration complexity for solving these composite optimization …
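As a point of reference for the finite-sum setting, here is a sketch of the generic randomized incremental gradient step; the names `grads` and `stepsize` are illustrative, and the paper's randomized primal-dual method is considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def incremental_step(grads, x, stepsize):
    """One step on f(x) = (1/m) * sum_i f_i(x): sample a single component
    uniformly and step along its gradient alone, so each iteration touches
    one component instead of all m. Generic template only, not the paper's
    primal-dual (PDG) scheme."""
    i = rng.integers(len(grads))       # random component index
    return x - stepsize * grads[i](x)
```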

Conditional Gradient Sliding for Convex Optimization

In this paper, we present a new conditional gradient type method for convex optimization by utilizing a linear optimization (LO) oracle to minimize a series of linear functions over the feasible set. Different from the classic conditional gradient method, the conditional gradient sliding (CGS) algorithm developed herein can skip the computation of gradients from time …
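For context, a sketch of the classic conditional gradient (Frank-Wolfe) loop that CGS builds on; `lo_oracle(g)` is assumed to return the minimizer of the linear function $\langle g, \cdot\rangle$ over the feasible set.

```python
def conditional_gradient(grad, lo_oracle, x0, iters):
    """Classic Frank-Wolfe baseline: one exact gradient and one LO-oracle
    call per iteration. CGS departs from this by reusing (approximate)
    gradient information across iterations, which this sketch does not show.
    x0 is assumed to be a NumPy array."""
    x = x0.copy()
    for k in range(iters):
        y = lo_oracle(grad(x))         # linear subproblem over the feasible set
        gamma = 2.0 / (k + 2)          # standard open-loop stepsize
        x = (1 - gamma) * x + gamma * y
    return x
```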

Gradient Sliding for Composite Optimization

We consider in this paper a class of composite optimization problems whose objective function is given by the summation of a general smooth component and a general nonsmooth component, together with a relatively simple nonsmooth term. We present a new class of first-order methods, namely the gradient sliding algorithms, which can skip the computation of the gradient for …
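A minimal sketch of the mechanism under stated assumptions: `g_frozen` is a gradient of the smooth component computed once outside the loop, `subgrad` returns a subgradient of the general nonsmooth component, and `prox` handles the simple nonsmooth term. The paper's actual prox-sliding procedure uses an averaged, accelerated variant of this loop.

```python
def sliding_inner_loop(g_frozen, subgrad, prox, x, stepsize, inner_iters):
    """Inner loop of the gradient-sliding idea: the smooth component's
    gradient is held fixed at g_frozen while cheap subgradient/prox steps
    handle the nonsmooth parts, so no new gradients of the smooth
    component are computed here."""
    for _ in range(inner_iters):
        x = prox(x - stepsize * (g_frozen + subgrad(x)), stepsize)
    return x
```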

Accelerating block-decomposition first-order methods for solving composite saddle-point and two-player Nash equilibrium problems

This article considers the two-player composite Nash equilibrium (CNE) problem with a separable non-smooth part, which is known to include the composite saddle-point (CSP) problem as a special case. Due to its two-block structure, this problem can be solved by any algorithm belonging to the block-decomposition hybrid proximal-extragradient (BD-HPE) framework. The framework consists of a …
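For reference, the classical extragradient step underlying the HPE machinery, sketched for a monotone operator F (for a saddle function $L(x,y)$ one may take $F(z) = (\nabla_x L, -\nabla_y L)$); BD-HPE generalizes this with inexact proximal steps taken block by block, which this sketch omits.

```python
def extragradient_step(F, z, eta):
    """One extragradient step: extrapolate using F at the current point,
    then update using F at the extrapolated point. Classical special case
    of the hybrid proximal-extragradient (HPE) template, for intuition only."""
    z_half = z - eta * F(z)            # prediction step
    return z - eta * F(z_half)         # correction step at the midpoint
```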

The Complexity of Large-scale Convex Programming under a Linear Optimization Oracle

This paper considers a general class of iterative optimization algorithms, referred to as linear-optimization-based convex programming (LCP) methods, for solving large-scale convex programming (CP) problems. The LCP methods, covering the classic conditional gradient (CG) method (a.k.a. the Frank-Wolfe method) as a special case, can solve only a linear optimization subproblem at each iteration. In this paper, …
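To make the oracle restriction concrete, here is a hypothetical LO oracle for the $\ell_1$ ball, where the linear subproblem admits a closed-form solution in a single pass over the gradient.

```python
import numpy as np

def lo_oracle_l1_ball(g, radius=1.0):
    """argmin of <g, x> over the l1 ball {x : ||x||_1 <= radius}: the
    minimum is attained at a signed, scaled coordinate vector, so an LCP
    method's per-iteration subproblem here costs only O(n)."""
    i = int(np.argmax(np.abs(g)))      # coordinate with the largest |g_i|
    x = np.zeros_like(g, dtype=float)
    x[i] = -radius * np.sign(g[i])
    return x
```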

A First-Order Smoothed Penalty Method for Compressed Sensing

We propose a first-order smoothed penalty algorithm (SPA) to solve the sparse recovery problem $\min\{\|x\|_1 : Ax=b\}$. SPA is efficient as long as the matrix-vector products $Ax$ and $A^T y$ can be computed efficiently; in particular, $A$ need not be an orthogonal projection matrix. SPA converges to the target signal by solving a sequence of penalized …
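A sketch of one penalty subproblem in this spirit, under illustrative assumptions (Huber-type smoothing of $\|x\|_1$ with parameter `mu`, quadratic penalty weight `lam`, plain gradient steps); SPA itself solves a sequence of such subproblems with updated parameters.

```python
import numpy as np

def penalty_subproblem(A, b, mu, lam, x0, stepsize, iters):
    """Gradient descent on huber_mu(x) + (lam/2) * ||Ax - b||^2, using only
    the products A @ x and A.T @ r, as the abstract requires. Illustrative
    stand-in for one SPA subproblem, not the paper's exact scheme."""
    x = x0.copy()
    for _ in range(iters):
        grad_l1 = np.clip(x / mu, -1.0, 1.0)   # gradient of the smoothed ||x||_1
        grad_pen = lam * (A.T @ (A @ x - b))   # gradient of the quadratic penalty
        x = x - stepsize * (grad_l1 + grad_pen)
    return x
```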

NESTA: A Fast and Accurate First-order Method for Sparse Recovery

Accurate signal recovery or image reconstruction from indirect and possibly undersampled data is a topic of considerable interest; for example, the literature in the recent field of compressed sensing is already quite immense. Inspired by recent breakthroughs in the development of novel first-order methods in convex optimization, most notably Nesterov’s smoothing technique, this paper …
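For reference, Nesterov's smoothing of $\|x\|_1$, the key ingredient the abstract mentions: $f_\mu(x) = \max\{\langle u, x\rangle - \tfrac{\mu}{2}\|u\|^2 : \|u\|_\infty \le 1\}$, whose gradient is the clipped maximizer. NESTA couples this smooth surrogate with Nesterov's accelerated scheme and continuation in $\mu$, neither of which the sketch below includes.

```python
import numpy as np

def smoothed_l1(x, mu):
    """Value and gradient of the Nesterov smoothing of ||x||_1 with
    parameter mu: the maximizing dual variable is u = clip(x/mu, -1, 1),
    and by Danskin's theorem u is also the gradient of f_mu at x.
    The gradient is (1/mu)-Lipschitz, enabling accelerated methods."""
    u = np.clip(x / mu, -1.0, 1.0)     # optimal dual variable
    value = float(u @ x - 0.5 * mu * (u @ u))
    return value, u
```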