Semi-algebraic functions have small subdifferentials

We prove that the subdifferential of any semi-algebraic extended-real-valued function on $\R^n$ has $n$-dimensional graph. We discuss consequences for generic semi-algebraic optimization problems. Cornell University, School of Operations Research and Information Engineering, 206 Rhodes Hall, Ithaca, NY 14853. April 2010.

MINRES-QLP: a Krylov subspace method for indefinite or singular symmetric systems

CG, SYMMLQ, and MINRES are Krylov subspace methods for solving symmetric systems of linear equations. When these methods are applied to an incompatible system (that is, a singular symmetric least-squares problem), CG could break down and SYMMLQ’s solution could explode, while MINRES would give a least-squares solution but not necessarily the minimum-length (pseudoinverse) solution. This … Read more
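The distinction the abstract draws can be seen on a tiny singular system. The sketch below (the diagonal matrix and right-hand side are illustrative choices, not from the paper) runs SciPy's `minres` on a singular but consistent symmetric system, where MINRES still returns a least-squares solution; the incompatible case, where MINRES need not return the minimum-length (pseudoinverse) solution, is what MINRES-QLP addresses.

```python
import numpy as np
from scipy.sparse.linalg import minres

# Singular symmetric system: A has a zero eigenvalue, but b lies in range(A),
# so the system is consistent and MINRES can drive the residual to zero.
A = np.diag([1.0, 2.0, 0.0])
b = np.array([1.0, 2.0, 0.0])

x, info = minres(A, b)  # info == 0 signals convergence

print(info)
print(np.allclose(A @ x, b))  # MINRES found a least-squares solution
```

CG, by contrast, requires positive definiteness and can break down on such a matrix.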

A First-Order Augmented Lagrangian Method for Compressed Sensing

We propose a First-order Augmented Lagrangian algorithm (FAL) for solving the basis pursuit problem. FAL computes a solution to this problem by inexactly solving a sequence of L1-regularized least squares sub-problems. These sub-problems are solved using an infinite memory proximal gradient algorithm wherein each update reduces to “shrinkage” or constrained “shrinkage”. We show that FAL … Read more
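The "shrinkage" update mentioned above is the classical soft-thresholding operator, the proximal map of the L1 norm. A minimal sketch of one proximal-gradient step for an L1-regularized least-squares subproblem (function names and step-size choice here are illustrative, not FAL's exact scheme):

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding: prox of t * ||.||_1, i.e. the 'shrinkage' map."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_step(x, A, b, lam, step):
    """One proximal-gradient update for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    grad = A.T @ (A @ x - b)
    return shrink(x - step * grad, step * lam)

# Shrinkage pulls each entry toward zero by t and truncates at zero:
print(shrink(np.array([3.0, -0.5, 1.0]), 1.0))  # -> [ 2.  -0.   0.]
```

Each FAL subproblem is solved by iterating updates of this form, with a constrained variant of the shrinkage map when bounds are present.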

Necessary optimality conditions for multiobjective bilevel programs

The multiobjective bilevel program is a sequence of two optimization problems where the upper level problem is multiobjective and the constraint region of the upper level problem is determined implicitly by the solution set to the lower level problem. In the case where the Karush-Kuhn-Tucker (KKT) condition is necessary and sufficient for global optimality of … Read more

An inexact interior point method for L1-regularized sparse covariance selection

Sparse covariance selection problems can be formulated as log-determinant (log-det) semidefinite programming (SDP) problems with large numbers of linear constraints. Standard primal-dual interior-point methods that are based on solving the Schur complement equation would encounter severe computational bottlenecks if they are applied to solve these SDPs. In this paper, we consider a customized inexact primal-dual … Read more

Minimizing irregular convex functions: Ulam stability for approximate minima

The main concern of this article is to study Ulam stability of the set of $\varepsilon$-approximate minima of a proper lower semicontinuous convex function bounded below on a real normed space $X$, when the objective function is subjected to small perturbations (in the sense of Attouch \& Wets). More precisely, we characterize the class of all … Read more

Generalized differentiation with positively homogeneous maps: Applications in set-valued analysis and metric regularity

We propose a new concept of generalized differentiation of set-valued maps that captures first-order information. This concept encompasses the standard notions of Frechet differentiability, strict differentiability, calmness and Lipschitz continuity in single-valued maps, and the Aubin property and Lipschitz continuity in set-valued maps. We present calculus rules, sharpen the relationship between the Aubin … Read more

Discriminants and Nonnegative Polynomials

For a semialgebraic set K in R^n, let P_d(K) be the cone of polynomials in R^n of degree at most d that are nonnegative on K. This paper studies the geometry of its boundary. When K=R^n and d is even, we show that its boundary lies on the irreducible hypersurface defined by the discriminant of … Read more

Sparse optimization with least-squares constraints

The use of convex optimization for the recovery of sparse signals from incomplete or compressed data is now common practice. Motivated by the success of basis pursuit in recovering sparse vectors, new formulations have been proposed that take advantage of different types of sparsity. In this paper we propose an efficient algorithm for solving a … Read more

Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization

The nuclear norm is widely used to induce low-rank solutions for many optimization problems with matrix variables. Recently, it has been shown that the augmented Lagrangian method (ALM) and the alternating direction method (ADM) are very efficient for many convex programming problems arising from various applications, provided that the resulting subproblems are sufficiently simple to … Read more
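For nuclear-norm problems, the "sufficiently simple" subproblem alluded to above typically has a closed form: the proximal map of the nuclear norm is singular value thresholding. A hedged sketch (this is the standard operator, not necessarily the exact subproblem solved in the paper):

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: prox of tau * ||.||_* (nuclear norm).
    Shrinks each singular value of Y by tau, truncating at zero."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Thresholding a diagonal matrix shrinks its singular values directly:
print(svt(np.diag([3.0, 1.0]), 2.0))  # -> [[1. 0.] [0. 0.]]
```

ALM and ADM iterations for nuclear-norm minimization reduce their matrix subproblem to one such SVT evaluation per step, which is why linearizing the remaining terms keeps each iteration cheap.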