MINRES-QLP: a Krylov subspace method for indefinite or singular symmetric systems

CG, SYMMLQ, and MINRES are Krylov subspace methods for solving symmetric systems of linear equations. When these methods are applied to an incompatible system (that is, a singular symmetric least-squares problem), CG could break down and SYMMLQ’s solution could explode, while MINRES would give a least-squares solution but not necessarily the minimum-length (pseudoinverse) solution. This … Read more
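A minimal sketch of the behavior described above, using SciPy's stock MINRES (not MINRES-QLP) on a small singular symmetric system with an incompatible right-hand side; the matrix, seed, and iteration cap are illustrative only:

```python
# Contrast plain MINRES with the minimum-length (pseudoinverse) solution on a
# small incompatible singular symmetric system. Illustrative, not MINRES-QLP.
import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
A = Q @ np.diag([3.0, 1.0, 0.5, 0.0, 0.0]) @ Q.T  # symmetric, rank 3
b = rng.standard_normal(5)                         # generically not in range(A)

x_mr, info = minres(A, b, maxiter=100)             # may flag non-convergence (info != 0)
x_pi = np.linalg.pinv(A) @ b                       # minimum-length least-squares solution

# Both residual norms approach the least-squares optimum ...
print(np.linalg.norm(A @ x_mr - b), np.linalg.norm(A @ x_pi - b))
# ... but the MINRES iterate can carry a null-space component, so its norm
# is generally larger than that of the pseudoinverse solution.
print(np.linalg.norm(x_mr), np.linalg.norm(x_pi))
```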

An inexact interior point method for L1-regularized sparse covariance selection

Sparse covariance selection problems can be formulated as log-determinant (log-det) semidefinite programming (SDP) problems with large numbers of linear constraints. Standard primal-dual interior-point methods based on solving the Schur complement equation would encounter severe computational bottlenecks if applied to these SDPs. In this paper, we consider a customized inexact primal-dual … Read more
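For orientation, here is the underlying log-det program stated with an off-the-shelf modeling tool (CVXPY) on a toy instance; the penalty weight rho and the data are made up, and this kind of generic solve is exactly what stops scaling at the problem sizes the paper targets:

```python
# L1-regularized sparse covariance selection as a log-det program:
#   min_X  tr(S X) - log det X + rho * sum_ij |X_ij|
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n = 8
S = np.cov(rng.standard_normal((n, 50)))  # toy sample covariance
rho = 0.1                                 # illustrative penalty weight

X = cp.Variable((n, n), symmetric=True)
obj = cp.trace(S @ X) - cp.log_det(X) + rho * cp.sum(cp.abs(X))
cp.Problem(cp.Minimize(obj)).solve()
print(np.round(X.value, 3))               # near-zero entries give the sparsity pattern
```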

Minimizing irregular convex functions: Ulam stability for approximate minima

The main concern of this article is to study Ulam stability of the set of $\varepsilon$-approximate minima of a proper lower semicontinuous convex function bounded below on a real normed space $X$, when the objective function is subjected to small perturbations (in the sense of Attouch & Wets). More precisely, we characterize the class of all … Read more
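For reference, the set in question is presumably the standard one,

\[
\operatorname{argmin}_{\varepsilon} f \;=\; \{\, x\in X : f(x) \le \inf_X f + \varepsilon \,\},\qquad \varepsilon>0,
\]

and Ulam stability asks how this set varies when $f$ is replaced by a nearby function in the Attouch-Wets topology.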

Discriminants and Nonnegative Polynomials

For a semialgebraic set K in R^n, let P_d(K) be the cone of polynomials in R^n of degree at most d that are nonnegative on K. This paper studies the geometry of its boundary. When K=R^n and d is even, we show that its boundary lies on the irreducible hypersurface defined by the discriminant of … Read more
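The simplest case (n = 1, d = 2, K = R) can be checked directly: $x^2+bx+c$ is nonnegative on R iff $b^2-4c\le 0$, and the boundary of the cone, $b^2=4c$, is exactly where the discriminant vanishes. A quick SymPy confirmation:

```python
import sympy as sp

x, b, c = sp.symbols('x b c')
print(sp.discriminant(x**2 + b*x + c, x))  # -> b**2 - 4*c
# On the boundary the polynomial acquires a double real root, e.g. b=2, c=1:
print(sp.roots(x**2 + 2*x + 1, x))         # -> {-1: 2}
```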

Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization

The nuclear norm is widely used to induce low-rank solutions for many optimization problems with matrix variables. Recently, it has been shown that the augmented Lagrangian method (ALM) and the alternating direction method (ADM) are very efficient for many convex programming problems arising from various applications, provided that the resulting subproblems are sufficiently simple to … Read more
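The workhorse subproblem inside such ALM/ADM schemes is the proximal map of the nuclear norm, i.e. singular value soft-thresholding; a minimal sketch (names and test data are illustrative):

```python
import numpy as np

def svt(Y, tau):
    """Prox of tau*||.||_* : soft-threshold the singular values of Y by tau."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(2)
Y = np.outer([1.0, 2.0, 3.0], [1.0, 0.0, -1.0]) + 0.01 * rng.standard_normal((3, 3))
print(np.linalg.matrix_rank(svt(Y, 0.5), tol=1e-8))  # -> 1: thresholding restores low rank
```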

Efficiency of coordinate descent methods on huge-scale optimization problems

In this paper we propose new methods for solving huge-scale optimization problems. For problems of this size, even the simplest full-dimensional vector operations are very expensive. Hence, we propose to apply an optimization technique based on random partial updates of the decision variables. For these methods, we prove global estimates for the rate of convergence. … Read more
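A minimal sketch of the idea (not the paper's exact methods or its rate analysis): randomized coordinate descent on a quadratic, where each step touches one coordinate and costs O(n) instead of O(n^2); all data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
M = rng.standard_normal((n, n))
A = M @ M.T / n + np.eye(n)          # symmetric positive definite
b = rng.standard_normal(n)

# Minimize f(x) = 0.5 x'Ax - b'x one random coordinate at a time.
x = np.zeros(n)
g = A @ x - b                        # gradient, maintained incrementally
for _ in range(50 * n):
    i = rng.integers(n)              # pick a random coordinate
    step = g[i] / A[i, i]            # exact minimization along coordinate i
    x[i] -= step
    g -= step * A[:, i]              # rank-one gradient update: O(n) work
print(np.linalg.norm(A @ x - b))     # residual driven toward zero
```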

L^p-norms, log-barriers and Cramer transform in optimization

We show that the Laplace approximation of a supremum by $L^p$-norms has interesting consequences in optimization. For instance, the logarithmic barrier functions (LBF) of a primal convex problem $P$ and its dual $P^*$ appear naturally when using this simple approximation technique for the value function $g$ of $P$ or its Legendre-Fenchel conjugate $g^*$. In addition, … Read more
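The approximation in question rests on the elementary fact that $L^p$-norms converge to the supremum as $p\to\infty$; a two-line numeric reminder:

```python
import numpy as np

v = np.array([0.3, 1.7, 2.5, 0.9])
for p in (1, 2, 8, 32, 128):
    print(p, np.linalg.norm(v, p))   # tends to max(v) = 2.5 as p grows
```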

Stability of error bounds for convex constraint systems in Banach spaces

This paper studies the stability of error bounds for convex constraint systems in Banach spaces. We show that certain known sufficient conditions for local and global error bounds actually ensure error bounds for a whole family of functions that are, in a sense, small perturbations of the given one. A single inequality as well as semi-infinite constraint systems … Read more
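For orientation, in the single-inequality case a global error bound is the standard condition (recalled here only for illustration) that there exist $c>0$ with

\[
d(x, S) \;\le\; c\,[f(x)]_+ \quad \text{for all } x\in X, \qquad S:=\{x\in X : f(x)\le 0\},\ \ [t]_+:=\max\{t,0\},
\]

and stability asks that one such constant $c$ survive small perturbations of $f$.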

A Fast Algorithm for Total Variation Image Reconstruction from Random Projections

Total variation (TV) regularization is popular in image restoration and reconstruction due to its ability to preserve image edges. To date, most research on TV models has concentrated on image restoration from blurry and noisy observations, while image reconstruction from random projections has received relatively little attention. In this paper, we propose, analyze, and test … Read more
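A minimal sketch of the model being minimized (only the objective is evaluated here; the paper's contribution is a fast algorithm for minimizing it, which this does not implement; the sampling matrix, weights, and sizes are illustrative):

```python
import numpy as np

def tv(u):
    """Anisotropic discrete total variation: sum of absolute forward differences."""
    return np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()

rng = np.random.default_rng(4)
u = rng.random((16, 16))                  # toy image
A = rng.standard_normal((64, 256))        # random projections: 64 measurements of 256 pixels
b = A @ u.ravel() + 0.01 * rng.standard_normal(64)
mu = 10.0                                 # illustrative fidelity weight
print(tv(u) + 0.5 * mu * np.linalg.norm(A @ u.ravel() - b) ** 2)
```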

Convergence to the optimal value for barrier methods combined with Hessian Riemannian gradient flows and generalized proximal algorithms

We consider the problem $\min_{x\in\R^n}\{f(x)\mid Ax=b, \ x\in\overline{C},\ g_j(x)\le0,\ j=1,\ldots,s\}$, where $b\in\R^m$, $A\in\R^{m\times n}$ is a full-rank matrix, $\overline{C}$ is the closure of a nonempty, open and convex subset $C$ of $\R^n$, and $g_j(\cdot)$, $j=1,\ldots,s$, are nonlinear convex functions. Our strategy consists, first, in introducing a barrier-type penalty for the constraints $g_j(x)\le0$, … Read more
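As one classical instance of such a barrier-type penalty (the paper's penalty class may be broader), the log-barrier replaces the inequality constraints by a parametrized family

\[
\min_{x}\ f(x) \;-\; \mu \sum_{j=1}^{s}\log\big(-g_j(x)\big) \quad \text{s.t.}\quad Ax=b,\ x\in\overline{C},
\]

whose optimal values one expects to converge to that of the original problem as $\mu\downarrow 0$.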