A Sparsity Preserving Stochastic Gradient Method for Composite Optimization

We propose new stochastic gradient algorithms for solving convex composite optimization problems. In each iteration, our algorithms utilize a stochastic oracle for the gradient of the smooth component of the objective function. Our algorithms are based on a stochastic version of the estimate sequence technique introduced by Nesterov (Introductory Lectures on Convex Optimization: A Basic … Read more
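As a hedged illustration (our sketch, not the paper's algorithm), a proximal stochastic gradient step for a composite objective $f(x) + \lambda\|x\|_1$ keeps iterates sparse by applying the exact prox of the $L_1$ term after each noisy gradient step; `stoch_grad`, the step size, and the iteration count are placeholders supplied by the user.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_sgd(stoch_grad, x0, lam, step, n_iters, rng):
    """Minimal proximal stochastic gradient loop for
    min_x f(x) + lam * ||x||_1, where stoch_grad(x, rng) returns an
    unbiased estimate of grad f(x). Applying the exact prox of the
    L1 term after every stochastic step preserves sparsity."""
    x = x0.copy()
    for _ in range(n_iters):
        g = stoch_grad(x, rng)
        x = soft_threshold(x - step * g, step * lam)
    return x
```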

Fast First-Order Methods for Composite Convex Optimization with Backtracking

We propose new versions of accelerated first-order methods for convex composite optimization, where the prox parameter is allowed to increase from one iteration to the next. In particular, we show that a full backtracking strategy can be used within the FISTA (Beck and Teboulle, 2009) and FALM (Goldfarb, Ma, and Scheinberg, 2010) algorithms while preserving their worst-case iteration complexities of $O(\sqrt{L(f)/\epsilon})$. … Read more
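A minimal sketch of FISTA with full backtracking, assuming the standard quadratic upper-bound acceptance test; `f`, `grad_f`, and `prox_g` are illustrative callables. Decreasing `L` at the start of each iteration is what lets the prox parameter $1/L$ increase between iterations.

```python
import numpy as np

def fista_backtracking(f, grad_f, prox_g, x0, L0=1.0, eta=2.0, n_iters=100):
    """FISTA where the Lipschitz estimate L may decrease (so the prox
    parameter 1/L may increase) from one iteration to the next, with
    backtracking restoring the quadratic upper-bound condition."""
    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    for _ in range(n_iters):
        L /= eta                      # optimistically enlarge the prox step
        while True:
            g = grad_f(y)
            x_new = prox_g(y - g / L, 1.0 / L)
            d = x_new - y
            # Accept once f(x_new) <= f(y) + <g, d> + (L/2)*||d||^2.
            if f(x_new) <= f(y) + g @ d + 0.5 * L * (d @ d):
                break
            L *= eta                  # backtrack: shrink the prox step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```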

First-order Methods of Smooth Convex Optimization with Inexact Oracle

We introduce the notion of an inexact first-order oracle and analyze the behaviour of several first-order methods of smooth convex optimization used with such an oracle. This notion of inexact oracle naturally appears in the context of smoothing techniques, Moreau-Yosida regularization, augmented Lagrangians, and many other situations. We derive complexity estimates for primal, dual and fast … Read more
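As a concrete illustration of the smoothing connection (our example, not the paper's construction), a plain gradient method can be run unchanged on top of an approximate oracle; smoothing $|t|$ by the Huber function yields function and gradient values that under-approximate $\|x\|_1$ uniformly within $\mu/2$.

```python
import numpy as np

def inexact_gradient_method(oracle, x0, L, n_iters):
    """Primal gradient method driven by an inexact first-order oracle:
    oracle(x) returns an approximate pair (f_x, g_x). The method is
    unchanged; only the oracle is inexact."""
    x = x0.copy()
    for _ in range(n_iters):
        _, g = oracle(x)
        x = x - g / L
    return x

def huber_oracle(x, mu=1e-2):
    """Smoothed oracle for f(x) = ||x||_1: the Huber function matches
    |t| up to a uniform gap of mu/2, so this behaves as an inexact
    oracle with accuracy of order mu."""
    f = np.where(np.abs(x) <= mu, x * x / (2 * mu), np.abs(x) - mu / 2).sum()
    g = np.clip(x / mu, -1.0, 1.0)
    return f, g
```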

On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization

The (optimal) worst-case function/gradient evaluation complexity analysis available for the Adaptive Regularization algorithm with Cubics (ARC) for smooth nonconvex unconstrained optimization is extended to finite-difference versions of this algorithm, yielding complexity bounds for first-order and derivative-free methods applied to the same problem class. A comparison with the results obtained for derivative-free methods by Vicente … Read more
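The basic device behind such extensions is replacing the true gradient with a forward finite-difference estimate at the cost of $n$ extra function evaluations per gradient; a generic sketch follows (the paper's ARC variant manages the difference step with more care).

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward finite-difference gradient estimate: component i is
    (f(x + h*e_i) - f(x)) / h, turning a first-order method into a
    derivative-free one."""
    fx = f(x)
    g = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g
```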

Feasible and accurate algorithms for covering semidefinite programs

In this paper we describe an algorithm to approximately solve a class of semidefinite programs called covering semidefinite programs. This class includes many semidefinite programs that arise in the context of developing algorithms for important optimization problems such as sparsest cut, wireless multicasting, and pattern classification. We give algorithms for covering SDPs whose dependence on … Read more

A Unified Approach for Minimizing Composite Norms

We propose a first-order augmented Lagrangian algorithm (FALC) to solve the composite norm minimization problem $\min \|\sigma(F(X)-G)\|_\alpha + \|C(X)-d\|_\beta$ subject to $A(X)-b \in Q$, where $\sigma(X)$ denotes the vector of singular values of $X$, the matrix norm $\|\sigma(X)\|_\alpha$ denotes either the Frobenius, the nuclear, or the $L_2$-operator norm of $X$, the vector norm $\|\cdot\|_\beta$ … Read more
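A key primitive for first-order methods on such problems is the proximal map of the nuclear norm, computed by singular value thresholding; a minimal sketch of that building block (our illustration, not FALC itself):

```python
import numpy as np

def svt(X, t):
    """Singular value thresholding: the proximal map of t * ||X||_*,
    obtained by soft-thresholding the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt
```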

A First-Order Augmented Lagrangian Method for Compressed Sensing

We propose a First-order Augmented Lagrangian algorithm (FAL) for solving the basis pursuit problem. FAL computes a solution to this problem by inexactly solving a sequence of L1-regularized least squares sub-problems. These sub-problems are solved using an infinite memory proximal gradient algorithm wherein each update reduces to “shrinkage” or constrained “shrinkage”. We show that FAL … Read more
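The "shrinkage" update is elementwise soft-thresholding; for orientation, here is a plain proximal gradient (ISTA) loop for the $L_1$-regularized least squares subproblem, a simpler stand-in for FAL's infinite memory variant:

```python
import numpy as np

def shrink(x, t):
    """Shrinkage (soft-thresholding): prox of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_l1_ls(A, b, lam, n_iters=500):
    """Proximal gradient steps for min_x lam*||x||_1 + 0.5*||Ax - b||^2:
    a gradient step on the quadratic followed by shrinkage."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of A.T @ (A x - b)
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = shrink(x - A.T @ (A @ x - b) / L, lam / L)
    return x
```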

Alternating Direction Methods for Sparse Covariance Selection

The widely used sparse covariance selection problem (SCSP) is, in its exact form, an NP-hard combinatorial problem, but it can be well approximated by a convex relaxation in which the maximum likelihood estimation is penalized by the $L_1$ norm. This convex relaxation, however, is still numerically challenging, especially for large-scale cases. Recently, some efficient first-order methods … Read more
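For reference, a textbook alternating direction (ADMM) sketch for the relaxation $\min_X -\log\det X + \mathrm{tr}(SX) + \rho\|X\|_1$; the splitting, penalty `beta`, and iteration count are illustrative choices, not necessarily those of the methods surveyed here.

```python
import numpy as np

def admm_sparse_inv_cov(S, rho, beta=1.0, n_iters=100):
    """Alternating direction method for the L1-penalized maximum
    likelihood problem min_X -logdet(X) + tr(S X) + rho*||X||_1.
    The X-step has a closed form via an eigendecomposition, the Z-step
    is elementwise shrinkage, and U is the scaled dual variable."""
    n = S.shape[0]
    X, Z, U = np.eye(n), np.eye(n), np.zeros((n, n))
    for _ in range(n_iters):
        # X-step: solve beta*X - inv(X) = beta*(Z - U) - S spectrally.
        lam, Q = np.linalg.eigh(beta * (Z - U) - S)
        d = (lam + np.sqrt(lam ** 2 + 4.0 * beta)) / (2.0 * beta)
        X = (Q * d) @ Q.T
        # Z-step: prox of (rho/beta)*||.||_1 (soft-thresholding).
        V = X + U
        Z = np.sign(V) * np.maximum(np.abs(V) - rho / beta, 0.0)
        # Dual ascent on the multiplier.
        U += X - Z
    return Z
```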

A First-Order Smoothed Penalty Method for Compressed Sensing

We propose a first-order smoothed penalty algorithm (SPA) to solve the sparse recovery problem $\min\{\|x\|_1 : Ax = b\}$. SPA is efficient as long as the matrix-vector products $Ax$ and $A^T y$ can be computed efficiently; in particular, $A$ need not be an orthogonal projection matrix. SPA converges to the target signal by solving a sequence of penalized … Read more
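A simplified quadratic-penalty continuation sketch in the same spirit (not SPA itself, which additionally smooths the nonsmooth term): each penalized problem $\min \|x\|_1 + \frac{1}{2\mu}\|Ax-b\|^2$ is solved by shrinkage steps and warm-starts the next as $\mu \to 0$, so only products with $A$ and $A^T$ are required.

```python
import numpy as np

def penalty_bp(A, b, mu0=1.0, mu_min=1e-6, factor=0.1, inner=200):
    """Quadratic-penalty continuation for min{||x||_1 : A x = b}:
    drive mu to zero, solving each penalized problem by warm-started
    proximal gradient (shrinkage) steps."""
    L0 = np.linalg.norm(A, 2) ** 2     # ||A||^2, so the step is mu / L0
    x, mu = np.zeros(A.shape[1]), mu0
    while mu >= mu_min:
        step = mu / L0
        for _ in range(inner):
            v = x - step * (A.T @ (A @ x - b)) / mu
            x = np.sign(v) * np.maximum(np.abs(v) - step, 0.0)
        mu *= factor
    return x
```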

Iteration-complexity of first-order augmented Lagrangian methods for convex programming

This paper considers a special class of convex programming (CP) problems whose feasible regions consist of a simple compact convex set intersected with an affine manifold. We present first-order methods for this class of problems based on an inexact version of the classical augmented Lagrangian (AL) approach, where the subproblems are approximately solved by means … Read more
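Schematically, an inexact augmented Lagrangian loop of this kind can be written as below; `solve_subproblem` stands in for the inner first-order solver, and the penalty `beta` and tolerance schedule are assumed placeholders, not the paper's specific choices.

```python
import numpy as np

def inexact_al(solve_subproblem, A, b, x0, y0, beta=1.0, n_outer=20, tol0=1.0):
    """Inexact augmented Lagrangian loop for min{ f(x) : A x = b, x in X }.
    solve_subproblem(y, beta, tol, x) is assumed to return an approximate
    minimizer over X of f(x) + y.(A x - b) + (beta/2)*||A x - b||^2,
    warm-started at x; the inner tolerance tightens geometrically."""
    x, y, tol = x0.copy(), y0.copy(), tol0
    for _ in range(n_outer):
        x = solve_subproblem(y, beta, tol, x)   # approximate inner solve
        y = y + beta * (A @ x - b)              # multiplier update
        tol *= 0.5
    return x, y
```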