Efficient Block-coordinate Descent Algorithms for the Group Lasso

We present two algorithms to solve the Group Lasso problem [Yuan & Lin]. First, we propose a general version of the Block Coordinate Descent (BCD) algorithm for the Group Lasso that employs an efficient approach for optimizing each subproblem. We show that it exhibits excellent performance when the groups are of moderate size. For large …
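
For intuition, here is a minimal sketch of BCD sweeps for the Group Lasso; it assumes each group's columns A[:, g] are orthonormal (a common setting in the Group Lasso literature), in which case the block subproblem is solved exactly by group soft-thresholding. Function and parameter names are illustrative, not the paper's.

    import numpy as np

    def group_lasso_bcd(A, b, groups, lam, n_sweeps=100):
        # Block coordinate descent for min_x 0.5*||Ax - b||^2 + lam * sum_g ||x_g||_2.
        # Assumes each block A[:, g] has orthonormal columns, so each block
        # subproblem has the closed-form group soft-threshold solution.
        x = np.zeros(A.shape[1])
        for _ in range(n_sweeps):
            for g in groups:                   # g is an index array, e.g. np.arange(0, 3)
                x[g] = 0.0
                r = b - A @ x                  # residual with group g removed
                z = A[:, g].T @ r
                nz = np.linalg.norm(z)
                if nz > lam:
                    x[g] = (1.0 - lam / nz) * z   # group soft-threshold
        return x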

Fast Multiple Splitting Algorithms for Convex Optimization

We present in this paper two different classes of general $K$-splitting algorithms for solving finite-dimensional convex optimization problems. Under the assumption that the function being minimized has a Lipschitz continuous gradient, we prove that the number of iterations needed by the first class of algorithms to obtain an $\epsilon$-optimal solution is $O(1/\epsilon)$. The algorithms in …
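
As a rough illustration of the splitting idea (not the paper's algorithms), the sketch below handles each f_i separately with its own gradient step and averages the K results; when each gradient is L-Lipschitz, the averaged step is a valid gradient step on the sum.

    import numpy as np

    def jacobi_splitting(grads, x0, L, n_iter=2000):
        # Minimize F(x) = sum_i f_i(x): take one gradient step on each f_i
        # from the current point and average the K results.  The averaged
        # step equals a gradient step on F with step size 1/(K*L).
        x = x0.copy()
        K = len(grads)
        for _ in range(n_iter):
            x = sum(x - (1.0 / L) * g(x) for g in grads) / K
        return x

    # example: f_1(x) = 0.5||x - a||^2 and f_2(x) = 0.5||x - c||^2; minimizer is (a + c)/2
    a, c = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    print(jacobi_splitting([lambda x: x - a, lambda x: x - c], np.zeros(2), L=1.0))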

Fast Alternating Linearization Methods for Minimizing the Sum of Two Convex Functions

We present in this paper first-order alternating linearization algorithms based on an alternating direction augmented Lagrangian approach for minimizing the sum of two convex functions. Our basic methods require at most $O(1/\epsilon)$ iterations to obtain an $\epsilon$-optimal solution, while our accelerated (i.e., fast) versions of them require at most $O(1/\sqrt{\epsilon})$ iterations, with little change in …
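
The pair below is not the paper's alternating linearization method but the classical proximal-gradient scheme and its accelerated variant for the same problem class; it exhibits the same $O(1/\epsilon)$ versus $O(1/\sqrt{\epsilon})$ gap.

    import numpy as np

    def prox_gradient(grad_f, prox_g, L, x0, n_iter=500, fast=True):
        # Minimize f(x) + g(x), f smooth with L-Lipschitz gradient.
        # fast=False: basic scheme, O(1/eps) iterations.
        # fast=True:  Nesterov-style momentum, O(1/sqrt(eps)) iterations.
        x = y = x0.copy()
        t = 1.0
        for _ in range(n_iter):
            x_new = prox_g(y - grad_f(y) / L, 1.0 / L)
            if fast:
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                y = x_new + ((t - 1.0) / t_new) * (x_new - x)
                t = t_new
            else:
                y = x_new
            x = x_new
        return x

    # example: f(x) = 0.5||Ax - b||^2, g(x) = lam*||x||_1 (prox = soft-threshold)
    rng = np.random.default_rng(0)
    A, b, lam = rng.standard_normal((20, 50)), rng.standard_normal(20), 0.1
    L = np.linalg.norm(A, 2) ** 2
    soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
    x = prox_gradient(lambda v: A.T @ (A @ v - b),
                      lambda v, t: soft(v, t * lam), L, np.zeros(50))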

Alternating Direction Augmented Lagrangian Methods for Semidefinite Programming

We present an alternating direction method based on an augmented Lagrangian framework for solving semidefinite programming (SDP) problems in standard form. At each iteration, the algorithm, also known as a two-splitting scheme, minimizes the dual augmented Lagrangian function sequentially with respect to the Lagrange multipliers corresponding to the linear constraints, then the dual slack variables …
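
A schematic version of the two-splitting sweep described above, assuming the constraint matrices $A_i$ are linearly independent and the penalty $\mu$ is held fixed (the paper's stopping rules and parameter updates are omitted):

    import numpy as np

    def sdp_adal(C, A_list, b, mu=1.0, n_iter=500):
        # Dual ADAL sketch for: min <C,X> s.t. <A_i,X> = b_i, X PSD.
        # Each sweep: y-update (linear solve), S-update (PSD projection),
        # then the multiplier X in closed form.
        n = C.shape[0]
        X, S = np.zeros((n, n)), np.zeros((n, n))
        G = np.array([[np.sum(Ai * Aj) for Aj in A_list] for Ai in A_list])
        for _ in range(n_iter):
            rhs = mu * (b - np.array([np.sum(Ai * X) for Ai in A_list])) \
                  + np.array([np.sum(Ai * (C - S)) for Ai in A_list])
            y = np.linalg.solve(G, rhs)                 # multipliers for the linear constraints
            V = C - sum(yi * Ai for yi, Ai in zip(y, A_list)) - mu * X
            w, Q = np.linalg.eigh(V)
            S = (Q * np.maximum(w, 0.0)) @ Q.T          # dual slack: PSD projection of V
            X = (S - V) / mu                            # stays PSD by construction
        return X, y, S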

Convergence of fixed-point continuation algorithms for matrix rank minimization

The matrix rank minimization problem has applications in many fields such as system identification, optimal control, low-dimensional embedding, etc. As this problem is NP-hard in general, its convex relaxation, the nuclear norm minimization problem, is often solved instead. Recently, Ma, Goldfarb and Chen proposed a fixed-point continuation algorithm for solving the nuclear norm minimization problem. …
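
The iteration whose convergence is analyzed alternates a gradient step with matrix shrinkage (soft-thresholding of singular values), with continuation driving the regularization weight down. A sketch, assuming the linear map is supplied as a dense matrix acting on vec(X), with illustrative parameter choices:

    import numpy as np

    def mat_shrink(Y, tau):
        # Proximal map of tau*||.||_*: soft-threshold the singular values.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def fpc(A, b, shape, mu_final, eta=0.25, n_inner=200):
        # Fixed-point continuation for min mu*||X||_* + 0.5*||A vec(X) - b||^2.
        tau = 1.0 / np.linalg.norm(A, 2) ** 2              # step size <= 1/||A||^2
        X = np.zeros(shape)
        mu = np.linalg.norm((A.T @ b).reshape(shape), 2)   # generous initial mu
        while True:
            mu = max(eta * mu, mu_final)                   # continuation on mu
            for _ in range(n_inner):
                G = (A.T @ (A @ X.ravel() - b)).reshape(shape)
                X = mat_shrink(X - tau * G, tau * mu)
            if mu <= mu_final:
                return X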

Row by row methods for semidefinite programming

We present a row-by-row (RBR) method for solving semidefinite programming (SDP) problems, based on solving a sequence of problems obtained by restricting the $n$-dimensional positive semidefinite constraint on the matrix $X$. By fixing any $(n-1)$-dimensional principal submatrix of $X$ and using its (generalized) Schur complement, the positive semidefinite constraint is reduced to a simple second-order …
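
The reduction rests on the (generalized) Schur complement: writing the free row/column of $X$ last,

\[
X \;=\; \begin{pmatrix} B & y \\ y^T & \alpha \end{pmatrix} \succeq 0
\quad\Longleftrightarrow\quad
y \in \operatorname{Range}(B) \ \text{ and } \ \alpha - y^T B^{\dagger} y \ge 0,
\]

so with the $(n-1)$-dimensional principal submatrix $B \succ 0$ held fixed, the constraint on the remaining row reduces to $y^T B^{-1} y \le \alpha$, a rotated second-order cone constraint in $(y, \alpha)$.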

A Fast Algorithm for Sparse Reconstruction based on Shrinkage, Subspace Optimization and Continuation

We propose a fast algorithm for solving the $\ell_1$-regularized least squares problem for recovering sparse solutions to an underdetermined system of linear equations $Ax = b$. The algorithm is divided into two stages that are performed repeatedly. In the first stage a first-order iterative method called “shrinkage” yields an estimate of the subset of components …
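
A two-stage sketch under illustrative names; unlike the paper's method, stage two here solves the reduced smooth problem without sign constraints or a line search:

    import numpy as np

    def shrink_subspace(A, b, lam, n_outer=10, n_shrink=50):
        # Stage 1: "shrinkage" (soft-thresholded gradient) steps estimate the
        # support; Stage 2: smooth subspace solve on that support with the
        # signs held fixed.
        m, n = A.shape
        tau = 1.0 / np.linalg.norm(A, 2) ** 2
        soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
        x = np.zeros(n)
        for _ in range(n_outer):
            for _ in range(n_shrink):
                x = soft(x - tau * A.T @ (A @ x - b), tau * lam)
            T = np.flatnonzero(x)              # estimated active set
            if T.size == 0:
                break
            s = np.sign(x[T])
            # minimize 0.5*||A_T z - b||^2 + lam*s^T z over the subspace
            # (assumes A[:, T] has full column rank)
            z = np.linalg.solve(A[:, T].T @ A[:, T], A[:, T].T @ b - lam * s)
            x = np.zeros(n)
            x[T] = z
        return x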

Fixed point and Bregman iterative methods for matrix rank minimization

The linearly constrained matrix rank minimization problem is widely applicable in many fields such as control, signal processing and system identification. The linearly constrained nuclear norm minimization is a convex relaxation of this problem. Although it can be cast as a semidefinite programming problem, the nuclear norm minimization problem is expensive to solve when the …
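
A self-contained sketch in the matrix completion special case, where the sampling operator has unit norm so a unit step size is safe; the shrinkage fixed-point inner loop is wrapped in Bregman residual add-back, and all parameters are illustrative:

    import numpy as np

    def complete_bregman(M, mask, mu=1.0, n_outer=5, n_inner=100):
        # Approximately solve min ||X||_* s.t. X[mask] = M[mask].
        B = M * mask
        X = np.zeros_like(M)
        for _ in range(n_outer):
            for _ in range(n_inner):
                Y = X - mask * (X - B)                        # gradient step, step size 1
                U, s, Vt = np.linalg.svd(Y, full_matrices=False)
                X = (U * np.maximum(s - mu, 0.0)) @ Vt        # shrink singular values
            B = B + mask * (M - X)                            # Bregman: add back residual
        return X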

Interior-Point $\ell_2$ Penalty Methods for Nonlinear Programming with Strong Global Convergence Properties

We propose two line-search primal-dual interior-point methods that approximately solve a sequence of equality-constrained barrier subproblems. To solve each subproblem, our methods apply a modified Newton method and use an $\ell_2$-exact penalty function to attain feasibility. Our methods have strong global convergence properties under standard assumptions. Specifically, if the penalty parameter remains bounded, …
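
In a generic slack formulation (the authors' exact merit function may differ), the barrier subproblem for $\min f(x)$ s.t. $c_E(x) = 0$, $c_I(x) \ge 0$ and the associated $\ell_2$ penalty read

\[
\min_{x,\, s > 0} \ f(x) - \mu \sum_i \ln s_i \quad \text{s.t.} \quad c_E(x) = 0, \quad c_I(x) - s = 0,
\]
\[
\phi_\nu(x, s) \;=\; f(x) - \mu \sum_i \ln s_i + \nu \left\| \begin{pmatrix} c_E(x) \\ c_I(x) - s \end{pmatrix} \right\|_2,
\]

where the norm term is deliberately not squared, which is what makes the penalty exact for a finite penalty parameter $\nu$.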

Second-order Cone Programming Methods for Total Variation-based Image Restoration

In this paper we present optimization algorithms for image restoration based on the total variation (TV) minimization framework of L. Rudin, S. Osher and E. Fatemi (ROF). Our approach formulates TV minimization as a second-order cone program which is then solved by interior-point algorithms that are efficient both in practice (using nested dissection and domain …
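
The conic formulation can be prototyped with an off-the-shelf modeling layer; the sketch below uses cvxpy (a stand-in, not the authors' interior-point implementation) with a placeholder random image and noise budget $\sigma$. Each pixel's gradient norm contributes one second-order cone, and the fidelity bound is another.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n, sigma = 32, 3.0                       # placeholder image size and noise budget
    f = rng.random((n, n))                   # stand-in for the noisy image

    u = cp.Variable((n, n))
    dx = u[1:, :-1] - u[:-1, :-1]            # forward differences
    dy = u[:-1, 1:] - u[:-1, :-1]
    # discrete TV(u): sum of pixelwise gradient norms
    tv = cp.sum(cp.norm(cp.vstack([cp.vec(dx), cp.vec(dy)]), axis=0))
    prob = cp.Problem(cp.Minimize(tv), [cp.norm(u - f, 'fro') <= sigma])
    prob.solve()                             # dispatched to a conic (SOCP) solver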