Proximal alternating direction-based contraction methods for separable linearly constrained convex optimization

The alternating direction method (ADM) has been well studied in the context of linearly constrained convex programming problems. Recently, owing to its efficiency and ease of implementation in novel applications, ADM has been extended to problems with finitely many separable parts. The algorithmic framework of the extended method consists of two …
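
For orientation, the classical two-block ADM (a standard sketch, not the paper's proximal contraction variant) solves

\[
\min_{x,y}\ \theta_1(x)+\theta_2(y) \quad \text{s.t.}\quad Ax+By=b
\]

by alternating minimization of the augmented Lagrangian $\mathcal{L}_\beta(x,y,\lambda)=\theta_1(x)+\theta_2(y)-\lambda^\top(Ax+By-b)+\tfrac{\beta}{2}\|Ax+By-b\|^2$:

\[
x^{k+1}=\arg\min_x \mathcal{L}_\beta(x,y^k,\lambda^k),\quad
y^{k+1}=\arg\min_y \mathcal{L}_\beta(x^{k+1},y,\lambda^k),\quad
\lambda^{k+1}=\lambda^k-\beta\,(Ax^{k+1}+By^{k+1}-b).
\]

The extension referred to above treats $\min \sum_{i=1}^m \theta_i(x_i)$ subject to $\sum_{i=1}^m A_i x_i = b$ with $m$ separable parts.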

L1 Minimization via Randomized First Order Algorithms

In this paper we propose randomized first-order algorithms for solving bilinear saddle point problems. Our developments are motivated by the need for sublinear-time algorithms to solve large-scale parametric bilinear saddle point problems where cheap online assessment of solution quality is crucial. We present theoretical efficiency estimates of our algorithms and discuss a number …
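
For reference, a bilinear saddle point problem (standard form, not quoted from the paper) reads

\[
\min_{x\in X}\ \max_{y\in Y}\ \langle y, Ax\rangle + \langle a, x\rangle - \langle b, y\rangle ,
\]

with $X$, $Y$ convex and compact; L1 minimization problems such as $\min\{\|x\|_1 : Ax=b\}$ admit reformulations of this type. Randomized first-order methods typically replace the matrix-vector products $Ax$ and $A^\top y$ with cheap unbiased sampled estimates, which is what opens the door to sublinear running times.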

Information Geometry and Primal-Dual Interior-point Algorithms

In this paper, we study polynomial-time interior-point algorithms from the viewpoint of information geometry. We introduce an information geometric structure for a conic linear program based on a self-concordant barrier function. A Riemannian metric is defined via the Hessian of the barrier function. We introduce two connections $\nabla$ and $\nabla^*$, which roughly correspond to the primal and …
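
Concretely, the Hessian metric in question (standard construction) is

\[
g_x(u,v) = \langle u, \nabla^2 F(x)\, v\rangle ,
\]

where $F$ is the self-concordant barrier; for instance, the log barrier $F(x)=-\sum_i \log x_i$ on the nonnegative orthant gives $\nabla^2 F(x) = \mathrm{diag}(x_1^{-2},\dots,x_n^{-2})$.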

Generic nondegeneracy in convex optimization

We show that minimizers of convex functions subject to almost all linear perturbations are nondegenerate. An analogous result holds more generally for lower-$C^2$ functions. Citation: School of Operations Research and Information Engineering, Cornell University, 206 Rhodes Hall, Ithaca, NY 14853, May 2010.
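
In symbols (one standard reading of the statement, not quoted from the article): for a convex $f$ on $\mathbb{R}^n$, the claim is that for almost every $v \in \mathbb{R}^n$ the tilted problem

\[
\min_x\ f(x) - \langle v, x\rangle
\]

has a nondegenerate minimizer.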

On Equivalence of Semidefinite Relaxations for Quadratic Matrix Programming

In this paper, we analyze two popular semidefinite programming (SDP) relaxations for quadratically constrained quadratic programs (QCQP) with matrix variables. These are based on \emph{vector lifting} and on \emph{matrix lifting}, and they differ in size and expense. We prove, under mild assumptions, that these two relaxations provide equivalent bounds. Thus, our results provide a theoretical guideline …
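
Schematically (standard liftings, stated here in generic notation since the abstract is truncated): for a matrix variable $X \in \mathbb{R}^{n\times r}$, vector lifting works with $x=\operatorname{vec}(X)$ and the $(nr+1)\times(nr+1)$ positive semidefinite variable

\[
Y_V = \begin{pmatrix} 1 & x^\top \\ x & xx^\top \end{pmatrix} \succeq 0 ,
\]

whereas matrix lifting relaxes $Y_M = XX^\top \in \mathbb{S}^n$ through the much smaller Schur-complement constraint

\[
\begin{pmatrix} I_r & X^\top \\ X & Y_M \end{pmatrix} \succeq 0 ;
\]

the equivalence result says that, under mild assumptions, the cheaper matrix-lifted relaxation loses nothing in bound quality.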

MINRES-QLP: a Krylov subspace method for indefinite or singular symmetric systems

CG, SYMMLQ, and MINRES are Krylov subspace methods for solving symmetric systems of linear equations. When these methods are applied to an incompatible system (that is, a singular symmetric least-squares problem), CG could break down and SYMMLQ’s solution could explode, while MINRES would give a least-squares solution but not necessarily the minimum-length (pseudoinverse) solution. This …
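
A small numerical illustration of that distinction (assuming NumPy and SciPy; minres below is SciPy's implementation of classical MINRES, not the MINRES-QLP method of this paper):

import numpy as np
from scipy.sparse.linalg import minres

# Singular symmetric system: null(A) = span{e3}.
A = np.diag([2.0, 1.0, 0.0])
# Incompatible right-hand side: b has a component in null(A).
b = np.array([2.0, 1.0, 1.0])

# MINRES returns some least-squares solution of min ||Ax - b||.
x_mr, info = minres(A, b, maxiter=50)

# Minimum-length (pseudoinverse) solution, for comparison.
x_pi = np.linalg.pinv(A) @ b

print("residual norms:", np.linalg.norm(A @ x_mr - b), np.linalg.norm(A @ x_pi - b))
print("solution norms:", np.linalg.norm(x_mr), np.linalg.norm(x_pi))
# Both residuals match the least-squares optimum, but ||x_mr|| may
# exceed ||x_pi||: classical MINRES need not return the pseudoinverse
# solution, which is the gap MINRES-QLP is designed to close.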

An inexact interior point method for L1-regularized sparse covariance selection

Sparse covariance selection problems can be formulated as log-determinant (log-det) semidefinite programming (SDP) problems with large numbers of linear constraints. Standard primal-dual interior-point methods that are based on solving the Schur complement equation would encounter severe computational bottlenecks if they are applied to solve these SDPs. In this paper, we consider a customized inexact primal-dual …
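
The underlying problem (standard formulation, stated for orientation) is

\[
\max_{X \succ 0}\ \ \log\det X - \langle S, X\rangle - \rho \sum_{i,j} |X_{ij}| ,
\]

where $S$ is the sample covariance matrix and $\rho>0$ controls sparsity (the penalty is sometimes restricted to off-diagonal entries); rewriting the absolute values with auxiliary variables is one standard way the large number of linear constraints mentioned above arises.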

Minimizing irregular convex functions: Ulam stability for approximate minima

The main concern of this article is to study the Ulam stability of the set of $\varepsilon$-approximate minima of a proper lower semicontinuous convex function bounded below on a real normed space $X$, when the objective function is subjected to small perturbations (in the sense of Attouch & Wets). More precisely, we characterize the class of all …
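
For reference, the set under study (standard definition, consistent with the abstract's notation) is

\[
\varepsilon\text{-}\operatorname{argmin} f := \{\, x \in X : f(x) \le \inf_X f + \varepsilon \,\},
\]

and Attouch–Wets perturbations are small changes of $f$ measured by the epi-distance, i.e., the Hausdorff distance between epigraphs on bounded sets.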

Discriminants and Nonnegative Polynomials

For a semialgebraic set K in R^n, let P_d(K) be the cone of polynomials in R^n of degree at most d that are nonnegative on K. This paper studies the geometry of its boundary. When K=R^n and d is even, we show that its boundary lies on the irreducible hypersurface defined by the discriminant of …
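
In symbols (restating the abstract's definitions):

\[
P_d(K) := \{\, p \in \mathbb{R}[x_1,\dots,x_n] : \deg p \le d,\ \ p(u) \ge 0 \ \text{for all } u \in K \,\} .
\]

Roughly, a polynomial on the boundary of $P_d(\mathbb{R}^n)$ attains the value $0$ at some real point $u$, which is then a global minimizer, so $p(u)=0$ and $\nabla p(u)=0$ hold simultaneously; the vanishing of the discriminant is precisely the algebraic record of such a coincidence.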

Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization

The nuclear norm is widely used to induce low-rank solutions for many optimization problems with matrix variables. Recently, it has been shown that the augmented Lagrangian method (ALM) and the alternating direction method (ADM) are very efficient for many convex programming problems arising from various applications, provided that the resulting subproblems are sufficiently simple to …
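
The subproblem that makes ALM/ADM practical in this setting is the proximal map of the nuclear norm, which has a closed form via singular value soft-thresholding (a standard result; minimal NumPy sketch, with the hypothetical helper name svt):

import numpy as np

def svt(Y, tau):
    """Singular value thresholding: the proximal operator of
    tau * ||.||_* evaluated at Y (hypothetical helper name)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # soft-threshold the singular values
    return (U * s_shrunk) @ Vt            # U diag(s_shrunk) V^T

# Example: shrinking a random 5x4 matrix lowers (or preserves) its rank.
Y = np.random.default_rng(0).standard_normal((5, 4))
X = svt(Y, tau=1.0)
print(np.linalg.matrix_rank(X), "<=", np.linalg.matrix_rank(Y))

Linearized variants replace the quadratic coupling term in the augmented Lagrangian with a proximal linearization so that each iteration reduces to exactly one such thresholding step.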