Rank-Sparsity Incoherence for Matrix Decomposition

Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix into its sparse and low-rank components. Such a problem arises in a number of applications in model and system identification, and is NP-hard in general. In this …
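For concreteness, here is a minimal cvxpy sketch of the natural convex relaxation studied in this line of work, trading off the nuclear norm of the low-rank part against the $\ell_1$ norm of the sparse part; the weight gamma and the problem sizes are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: split C into a low-rank part L and a sparse part S via
#     minimize ||L||_* + gamma * ||S||_1   subject to  L + S = C.
# gamma and the sizes below are hand-picked for the demo.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 20
low_rank = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))  # rank 2
sparse = np.zeros((n, n))
sparse.flat[rng.choice(n * n, size=15, replace=False)] = 10.0  # 15 spikes
C = low_rank + sparse

L = cp.Variable((n, n))
S = cp.Variable((n, n))
gamma = 0.25  # assumed trade-off weight
problem = cp.Problem(cp.Minimize(cp.norm(L, "nuc") + gamma * cp.norm1(S)),
                     [L + S == C])
problem.solve()
print("rank of recovered L:", np.linalg.matrix_rank(L.value, tol=1e-6))
print("nonzeros in recovered S:", int(np.sum(np.abs(S.value) > 1e-6)))
```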

The L1-Norm Best-Fit Hyperplane Problem

We formalize an algorithm for solving the L1-norm best-fit hyperplane problem, derived from first principles and geometric insights about L1 projection and L1 regression. The procedure follows from a new proof of global optimality and relies on the solution of a small number of linear programs. The procedure is implemented for validation and testing. This …
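One plausible reading of "a small number of linear programs" is the classical reduction in which each coordinate in turn is treated as the response of a least-absolute-deviations (LAD) regression, and the best of the resulting fits is kept. The sketch below implements that reduction for hyperplanes through the origin; it is an assumption-laden illustration, not necessarily the paper's exact procedure.

```python
# Hedged sketch: L1-norm best-fit hyperplane through the origin by solving
# one LAD regression per coordinate (each an LP) and keeping the best fit.
import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    """min_b ||y - X b||_1 as an LP over (b, t); returns (b, objective)."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(n)])   # minimize sum of t
    A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])  # -t <= y - Xb <= t
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * p + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:p], res.fun

def l1_hyperplane(points):
    """Treat each coordinate as the response; return the best normal vector."""
    _, m = points.shape
    best = None
    for j in range(m):
        mask = np.arange(m) != j
        b, obj = lad_fit(points[:, mask], points[:, j])
        if best is None or obj < best[0]:
            normal = np.zeros(m)
            normal[j] = -1.0        # plane: sum_k b_k x_k - x_j = 0
            normal[mask] = b
            best = (obj, normal)
    return best[1], best[0]

rng = np.random.default_rng(1)
pts = rng.standard_normal((50, 3))
pts[:, 2] = 0.5 * pts[:, 0] - pts[:, 1]  # points lying on a plane
normal, err = l1_hyperplane(pts)
print("normal:", normal, "L1 error:", err)
```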

Convergence Rate of Stochastic Gradient Search in the Case of Multiple and Non-Isolated Minima

The convergence rate of stochastic gradient search is analyzed in this paper. Using arguments based on differential geometry and Lojasiewicz inequalities, tight bounds on the convergence rate of general stochastic gradient algorithms are derived. In contrast to existing results, the results presented in this paper allow the objective function to have multiple, non-isolated minima, …
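As a toy numpy illustration of the setting (not the paper's analysis): stochastic gradient descent on f(x) = (||x||^2 - 1)^2, whose minimizers form the entire unit circle, a connected set of non-isolated minima; the noise level and step-size schedule are arbitrary choices.

```python
# SGD on f(x) = (||x||^2 - 1)^2: the minimum set is the unit circle, so the
# minima are non-isolated. Decaying steps a_n ~ 1/n with additive noise.
import numpy as np

rng = np.random.default_rng(2)

def grad(x):
    return 4.0 * (x @ x - 1.0) * x   # exact gradient of f

x = np.array([1.5, 0.5])
for n in range(1, 50001):
    g = grad(x) + 0.1 * rng.standard_normal(2)  # noisy gradient estimate
    x -= g / (n + 50.0)
print("final ||x||:", np.linalg.norm(x))  # approaches 1 (some point on the circle)
print("f(x):", (x @ x - 1.0) ** 2)
```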

Compressed Sensing with Quantized Measurements

We consider the problem of estimating a sparse signal from a set of quantized, Gaussian-noise-corrupted measurements, where each measurement corresponds to an interval of values. We give two methods for (approximately) solving this problem, each based on minimizing a differentiable convex function plus an l1 regularization term. Using a first-order method developed …
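A hedged sketch of a formulation of this shape (not the paper's exact method): each quantized measurement constrains a_i^T x to an interval [lo_i, hi_i], we penalize the squared distance to that interval, which is smooth and convex, add an l1 term, and run proximal gradient (ISTA) with soft thresholding. The bin width, lam, and problem sizes are assumptions.

```python
# Smooth interval loss + l1 penalty, minimized by proximal gradient (ISTA).
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 80, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
z = A @ x_true + 0.01 * rng.standard_normal(m)

width = 0.5                          # assumed quantizer bin width
lo = np.floor(z / width) * width     # each measurement is the interval
hi = lo + width                      # [lo_i, hi_i] containing a_i^T x

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the smooth part
x = np.zeros(n)
for _ in range(2000):
    r = A @ x
    # gradient of 0.5 * sum of squared distances from r_i to [lo_i, hi_i]
    g = A.T @ (np.maximum(r - hi, 0.0) - np.maximum(lo - r, 0.0))
    v = x - step * g
    x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # soft threshold
print("l2 error:", np.linalg.norm(x - x_true))
```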

Verifiable conditions of $\ell_1$-recovery for sparse signals with sign restrictions

We propose necessary and sufficient conditions for a sensing matrix to be “$s$-semigood” — to allow for exact $\ell_1$-recovery of sparse signals with at most $s$ nonzero entries under sign restrictions on part of the entries. We express error bounds for imperfect $\ell_1$-recovery in terms of the characteristics underlying these conditions. These characteristics, although difficult …
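To illustrate the recovery problem itself (not the paper's verifiable conditions): in the fully sign-restricted case, where the signal is known to be nonnegative, $\ell_1$-recovery is the single linear program min 1^T x s.t. Ax = b, x >= 0. A minimal empirical check, with sizes chosen arbitrarily:

```python
# Nonnegative l1-recovery as one LP:  min 1^T x  s.t.  Ax = b, x >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
m, n, s = 40, 100, 6
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.uniform(0.5, 2.0, s)  # >= 0
b = A @ x_true

res = linprog(c=np.ones(n), A_eq=A, b_eq=b, bounds=[(0, None)] * n)
print("exact recovery:", np.allclose(res.x, x_true, atol=1e-4))
```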

Classification with Guaranteed Probability of Error

We introduce a general-purpose learning machine that we call the “Guaranteed Error Machine”, or GEM, and two learning algorithms, a “real GEM algorithm” and an “ideal GEM algorithm”. The real GEM algorithm is for use in real applications, while the ideal GEM algorithm is introduced as a theoretical tool; however, these two algorithms have identical …

Simultaneously solving seven optimization problems in relative scale

In this paper we develop and analyze an efficient algorithm that solves seven related optimization problems simultaneously, in relative scale. Each iteration of our method is very cheap, with the main work being a matrix-vector multiplication. We prove that if a certain sequence generated by the algorithm remains bounded, then the method must terminate in $O(1/\delta)$ …

Adaptive First-Order Methods for General Sparse Inverse Covariance Selection

In this paper, we consider estimating the sparse inverse covariance matrix of a Gaussian graphical model whose conditional independence structure is assumed to be partially known. As in [5], we formulate it as an $l_1$-norm penalized maximum likelihood estimation problem. Further, we propose an algorithmic framework and develop two first-order methods, namely adaptive spectral projected gradient …
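A cvxpy sketch of an $l_1$-penalized maximum-likelihood formulation of this kind, with a few precision-matrix entries fixed to zero to encode the known conditional independences; the penalty weight lam and the zero pattern are assumptions for the demo, not the paper's choices.

```python
# l1-penalized Gaussian MLE with some precision entries known to be zero:
#     minimize  tr(S X) - log det(X) + lam * ||X||_1
#     subject to X_ij = 0 for the known independences, X PSD.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
n = 8
samples = rng.standard_normal((200, n))
S = np.cov(samples, rowvar=False)          # sample covariance

X = cp.Variable((n, n), PSD=True)          # precision-matrix estimate
lam = 0.1
known_zeros = [(0, 7), (2, 5)]             # assumed known independences
cons = [X[i, j] == 0 for i, j in known_zeros]
cons += [X[j, i] == 0 for i, j in known_zeros]
obj = cp.Minimize(cp.trace(S @ X) - cp.log_det(X) + lam * cp.norm1(X))
cp.Problem(obj, cons).solve()
print("nonzeros in estimate:", int(np.sum(np.abs(X.value) > 1e-4)))
```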

On Verifiable Sufficient Conditions for Sparse Signal Recovery via L1 Minimization

We propose novel necessary and sufficient conditions for a sensing matrix to be “s-good” — to allow for exact L1-recovery of sparse signals with s nonzero entries when no measurement noise is present. Then we express the error bounds for imperfect L1-recovery (nonzero measurement noise, nearly s-sparse signal, near-optimal solution of the optimization problem yielding …
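As an empirical companion to the sign-restricted sketch above (again, a recovery check rather than the paper's verifiable certificate): without sign information, min ||x||_1 s.t. Ax = b becomes an LP through the standard split x = u - v with u, v >= 0.

```python
# Basis pursuit as an LP via the split x = u - v, u >= 0, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
m, n, s = 40, 100, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
b = A @ x_true

c = np.ones(2 * n)                      # sum(u) + sum(v) = ||x||_1 at optimum
A_eq = np.hstack([A, -A])               # A(u - v) = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print("exact recovery:", np.allclose(x_hat, x_true, atol=1e-4))
```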

Gradient based method for cone programming with application to large-scale compressed sensing

In this paper, we study a gradient-based method for general cone programming (CP) problems. In particular, we first consider four natural primal-dual convex smooth minimization reformulations for them, and then discuss a variant of Nesterov’s smooth (VNS) method recently proposed by Tseng [30] for solving these reformulations. The associated worst-case major arithmetic operation costs …
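As a minimal illustration of the accelerated smooth-minimization machinery such methods build on (a generic Nesterov-type accelerated gradient on a least-squares objective, not the VNS variant or its CP reformulations):

```python
# Generic accelerated (Nesterov-type) gradient descent on 0.5*||Ax - b||^2.
import numpy as np

rng = np.random.default_rng(7)
m, n = 60, 30
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient

x = y = np.zeros(n)
t = 1.0
for _ in range(300):
    x_new = y - (A.T @ (A @ y - b)) / L  # gradient step at the extrapolated y
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
    x, t = x_new, t_new
print("residual:", np.linalg.norm(A @ x - b))
```

The O(1/k^2) guarantee of this scheme for smooth convex problems is the baseline that smoothing-based CP methods of this kind inherit.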