Information Geometry and Primal-Dual Interior-point Algorithms

In this paper, we study polynomial-time interior-point algorithms from the viewpoint of information geometry. We introduce an information geometric structure for a conic linear program based on a self-concordant barrier function. A Riemannian metric is defined via the Hessian of the barrier function. We introduce two connections $\nabla$ and $\nabla^*$ which roughly correspond to the primal and …
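
To make the construction concrete, here is a minimal numeric sketch (our illustration, not the paper's general construction): for the standard log barrier on the positive orthant, the Riemannian metric at a point is simply the Hessian of the barrier.

```python
import numpy as np

# Minimal sketch, assuming the standard log barrier F(x) = -sum_i log(x_i)
# on the positive orthant; the Riemannian metric at x is the Hessian of F,
# which here is the diagonal matrix diag(1/x_i^2).
def barrier_hessian_metric(x):
    x = np.asarray(x, dtype=float)
    assert np.all(x > 0), "the log barrier is only defined for x > 0"
    return np.diag(1.0 / x**2)

def metric_norm(x, v):
    """Length of a tangent vector v at x, measured in the barrier metric."""
    G = barrier_hessian_metric(x)
    return float(np.sqrt(v @ G @ v))
```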

Sparse optimization with least-squares constraints

The use of convex optimization for the recovery of sparse signals from incomplete or compressed data is now common practice. Motivated by the success of basis pursuit in recovering sparse vectors, new formulations have been proposed that take advantage of different types of sparsity. In this paper we propose an efficient algorithm for solving a …
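
As a concrete instance of this problem family, the following hedged sketch formulates basis pursuit denoising (minimize the $\ell_1$ norm subject to a least-squares constraint) with CVXPY; the sizes and noise level are our toy choices, not the paper's.

```python
import numpy as np
import cvxpy as cp

# Toy basis pursuit denoising instance (our assumptions throughout):
# recover a sparse x from underdetermined noisy measurements b = A x + noise
# by minimizing ||x||_1 subject to ||A x - b||_2 <= sigma.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(40)
sigma = 0.1

x = cp.Variable(100)
prob = cp.Problem(cp.Minimize(cp.norm(x, 1)),
                  [cp.norm(A @ x - b, 2) <= sigma])
prob.solve()
print("nonzeros recovered:", int(np.sum(np.abs(x.value) > 1e-3)))
```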

Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization

The nuclear norm is widely used to induce low-rank solutions for many optimization problems with matrix variables. Recently, it has been shown that the augmented Lagrangian method (ALM) and the alternating direction method (ADM) are very efficient for many convex programming problems arising from various applications, provided that the resulting subproblems are sufficiently simple to …
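
The "sufficiently simple subproblem" in this setting is the proximal map of the nuclear norm, which has a closed form via singular value soft-thresholding; a short sketch (a standard result, with our function names):

```python
import numpy as np

# Proximal operator of mu*||X||_*: soft-threshold the singular values by mu
# (singular value thresholding, the closed-form subproblem solution that
# ALM/ADM schemes for nuclear norm problems rely on).
def svt(X, mu):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - mu, 0.0)) @ Vt
```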

Efficiency of coordinate descent methods on huge-scale optimization problems

In this paper we propose new methods for solving huge-scale optimization problems. For problems of this size, even the simplest full-dimensional vector operations are very expensive. Hence, we propose to apply an optimization technique based on random partial updates of the decision variables. For these methods, we prove global estimates for the rate of convergence. …
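
For intuition, here is a minimal randomized coordinate descent sketch on a quadratic (our toy setting, not the paper's general scheme): each iteration touches one row of the matrix, so the per-step cost stays far below a full-dimensional operation.

```python
import numpy as np

# Randomized coordinate descent on f(x) = 0.5 x^T Q x - b^T x with Q
# positive definite: pick a coordinate at random and minimize exactly
# along it. Cost per iteration is one row of Q, i.e. O(n), not O(n^2).
def rcd(Q, b, iters, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(len(b))
    for _ in range(iters):
        i = rng.integers(len(b))
        x[i] -= (Q[i] @ x - b[i]) / Q[i, i]
    return x
```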

Fast Multiple Splitting Algorithms for Convex Optimization

We present in this paper two different classes of general $K$-splitting algorithms for solving finite-dimensional convex optimization problems. Under the assumption that the function being minimized has a Lipschitz continuous gradient, we prove that the number of iterations needed by the first class of algorithms to obtain an $\epsilon$-optimal solution is $O(1/\epsilon)$. The algorithms in …
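
For orientation (this is not the paper's $K$-splitting scheme itself), the simplest two-block analogue is the proximal gradient method, which carries the same $O(1/\epsilon)$ iteration bound when the smooth part has a Lipschitz continuous gradient:

```python
import numpy as np

# Two-block analogue for minimizing f(x) + g(x), with f smooth (Lipschitz
# gradient) and g having an easy prox: a gradient step on f followed by a
# prox step on g. This attains an O(1/eps) iteration bound, matching the
# rate cited above for the first class of algorithms.
def prox_grad(grad_f, prox_g, x0, step, iters):
    x = x0.copy()
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Example prox for g = lam*||x||_1: prox_g = lambda v, s: soft(v, s * lam)
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
```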

Fast Alternating Linearization Methods for Minimizing the Sum of Two Convex Functions

We present in this paper first-order alternating linearization algorithms based on an alternating direction augmented Lagrangian approach for minimizing the sum of two convex functions. Our basic methods require at most $O(1/\epsilon)$ iterations to obtain an $\epsilon$-optimal solution, while our accelerated (i.e., fast) versions of them require at most $O(1/\sqrt{\epsilon})$ iterations, with little change in …
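
The acceleration idea can be illustrated with a FISTA-style extrapolation step (a related fast scheme, not the paper's exact alternating linearization method): adding momentum is what improves the bound from $O(1/\epsilon)$ to $O(1/\sqrt{\epsilon})$.

```python
import numpy as np

# FISTA-style acceleration sketch for f(x) + g(x): the same prox-gradient
# step as before, applied at an extrapolated point y; the momentum
# sequence t_k drives the improved O(1/sqrt(eps)) iteration bound.
def fista(grad_f, prox_g, x0, step, iters):
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(iters):
        x_new = prox_g(y - step * grad_f(y), step)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # extrapolation
        x, t = x_new, t_new
    return x
```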

Hedge Algorithm and Subgradient Methods

We show that the Hedge Algorithm, a method that is widely used in Machine Learning, can be interpreted as a particular subgradient algorithm for minimizing a well-chosen convex function, namely as a Mirror Descent Scheme. Using this reformulation, we establish three modifications and extensions of the Hedge Algorithm that are better or at least as …
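
The identification is easy to see at the level of the update rule; a minimal sketch (the standard Hedge / exponentiated-gradient update, with our variable names):

```python
import numpy as np

# One Hedge step: multiplicative weights followed by normalization.
# This is exactly entropic mirror descent on the probability simplex,
# with the loss vector playing the role of a subgradient and eta the
# step size.
def hedge_step(w, loss, eta):
    w = w * np.exp(-eta * np.asarray(loss))
    return w / w.sum()
```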

Alternating directions based contraction method for generally separable linearly constrained convex programming problems

The classical alternating direction method (ADM) has been well studied in the context of linearly constrained convex programming problems and variational inequalities where both the involved operators and constraints are separable into two parts. In particular, recent years have witnessed a number of novel applications arising in diverse areas (e.g. Image Processing and Statistics), for which …
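
For reference, the classical two-block ADM iteration that this work generalizes looks as follows (a hedged sketch; the argmin oracles for the augmented Lagrangian subproblems are assumed to be supplied by the caller):

```python
import numpy as np

# Classical two-block ADM for min f(x) + g(z) s.t. A x + B z = c.
# argmin_x / argmin_z solve the augmented Lagrangian subproblems in
# x and z respectively; beta is the penalty parameter.
def adm(argmin_x, argmin_z, A, B, c, z0, y0, beta, iters):
    assert iters >= 1
    z, y = z0.copy(), y0.copy()
    for _ in range(iters):
        x = argmin_x(z, y)                  # x-subproblem
        z = argmin_z(x, y)                  # z-subproblem
        y = y + beta * (A @ x + B @ z - c)  # multiplier update
    return x, z, y
```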

Smoothing techniques for solving semidefinite programs with many constraints

We use smoothing techniques to approximately solve mildly structured semidefinite programs with many constraints. As smoothing techniques require a specific problem format, we introduce an alternative problem formulation that fulfills the structural assumptions. The resulting algorithm has a complexity that depends linearly both on the number of constraints and on the inverse of the accuracy. …
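
The smoothing idea itself, independent of the SDP application here, replaces a nonsmooth max by a smooth surrogate whose gradient Lipschitz constant scales like $1/\mu$; a generic sketch (our illustration, not the paper's SDP-specific construction):

```python
import numpy as np

# Log-sum-exp smoothing of max(x): smooth_max lies within mu*log(n) of
# max(x) and has a gradient with Lipschitz constant 1/mu, which is the
# accuracy/smoothness trade-off that smoothing techniques exploit.
def smooth_max(x, mu):
    x = np.asarray(x, dtype=float)
    m = x.max()  # shift for numerical stability
    return m + mu * np.log(np.exp((x - m) / mu).sum())
```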

A Simpler Approach to Matrix Completion

This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low rank matrix. These results improve on prior work by Candes and Recht, Candes and Tao, and Keshavan, Montanari, and Oh. The reconstruction is accomplished by minimizing the nuclear norm, or sum of the singular …
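
The convex program the guarantee concerns can be stated in a few lines; a toy sketch with CVXPY (the sizes, rank, and sampling rate are our choices):

```python
import numpy as np
import cvxpy as cp

# Nuclear norm minimization for matrix completion: find the matrix of
# minimum nuclear norm that agrees with the observed entries of M.
rng = np.random.default_rng(1)
M = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))  # rank 2
mask = (rng.random(M.shape) < 0.5).astype(float)  # observed entry pattern

X = cp.Variable(M.shape)
prob = cp.Problem(cp.Minimize(cp.normNuc(X)),
                  [cp.multiply(mask, X) == mask * M])
prob.solve()
print("relative error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))
```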