1-Bit Compressive Sensing: Reformulation and RRSP-Based Sign Recovery Theory

Recently, 1-bit compressive sensing (1-bit CS) has been studied in the field of sparse signal recovery. Since the amplitude information of sparse signals is not available in 1-bit CS, it is often the support or the sign of a signal that can be exactly recovered with a decoding method. In this paper, we first … Read more
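A minimal sketch of the measurement model behind this setting, with illustrative problem sizes: only the sign of each linear measurement is retained, so any positive rescaling of the signal is invisible to the decoder, which is why recovery targets the support or sign pattern rather than the amplitude.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 50, 20, 3                 # measurements, dimension, sparsity (illustrative)

# A k-sparse signal; 1-bit CS will discard its amplitude entirely.
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n))
y = np.sign(A @ x)                  # 1-bit measurements: only signs survive

# Any positive rescaling of x yields the same measurements, so at best the
# support or sign pattern of x can be recovered, never its magnitude.
print(np.array_equal(y, np.sign(A @ (10.0 * x))))   # True
```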

Computing Restricted Isometry Constants via Mixed-Integer Semidefinite Programming

One of the fundamental tasks in compressed sensing is finding the sparsest solution to an underdetermined system of linear equations. It is well known that although this problem is NP-hard, under certain conditions it can be solved by a linear program that minimizes the 1-norm. The restricted isometry property has been one of the … Read more
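As a hedged illustration of the 1-norm relaxation mentioned here (not of the paper's mixed-integer semidefinite programs), the problem $\min \|x\|_1$ subject to $Ax=b$ can be cast as a linear program by splitting $x = u - v$ with $u, v \ge 0$:

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, b):
    """Solve min ||x||_1 s.t. Ax = b as an LP via the split x = u - v, u, v >= 0."""
    n = A.shape[1]
    c = np.ones(2 * n)                                  # sum(u) + sum(v) = ||x||_1
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b)   # default bounds give u, v >= 0
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 80))
x_true = np.zeros(80); x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
x_hat = l1_min(A, A @ x_true)
print(np.max(np.abs(x_hat - x_true)))                   # near zero: recovery succeeds here
```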

Sparse Recovery via Partial Regularization: Models, Theory and Algorithms

In the context of sparse recovery, it is known that most existing regularizers, such as $\ell_1$, suffer from a bias incurred by the leading entries (in magnitude) of the associated vector. To neutralize this bias, we propose a class of models with partial regularizers for recovering a sparse solution of a linear system. We … Read more
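A hedged sketch of the idea of a partial regularizer in the $\ell_1$ case: penalize all but the $r$ largest-magnitude entries, so the leading entries incur no bias. The exact model class in the paper may differ; the function name and the parameter $r$ here are illustrative.

```python
import numpy as np

def partial_l1(x, r):
    """Illustrative partial l1 regularizer: the l1 norm of all but the
    r largest-magnitude entries of x (leading entries are not penalized)."""
    mags = np.sort(np.abs(x))             # ascending order
    return mags[:-r].sum() if r > 0 else mags.sum()

x = np.array([5.0, -4.0, 0.3, -0.2, 0.1])
print(np.abs(x).sum())                    # full l1 = 9.6, dominated by the leaders
print(partial_l1(x, r=2))                 # partial l1 = 0.6, leaders exempt
```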

From error bounds to the complexity of first-order descent methods for convex functions

This paper shows that error bounds can be used as effective tools for deriving complexity results for first-order descent methods in convex minimization. In a first stage, this objective led us to revisit the interplay between error bounds and the Kurdyka-\L ojasiewicz (KL) inequality. One can show the equivalence between the two concepts for convex … Read more
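For orientation, a hedged sketch of the two conditions in the convex case, with $S = \operatorname{argmin} f$, minimum value $f_*$, and a desingularizing function $\varphi$; the precise constants and regularity assumptions follow the paper, not this display.

```latex
% KL inequality with desingularizer \varphi (convex case, sketch):
\[
  \varphi'\big(f(x)-f_*\big)\,\operatorname{dist}\big(0,\partial f(x)\big) \;\ge\; 1,
\]
% which, up to constants and regularity assumptions on \varphi, is
% equivalent to the error bound
\[
  \operatorname{dist}(x,S) \;\le\; \varphi\big(f(x)-f_*\big).
\]
```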

Quantitative recovery conditions for tree-based compressed sensing

As shown in [9, 1], signals whose wavelet coefficients exhibit a rooted tree structure can be recovered, using specially adapted compressed sensing algorithms, from just $n=\mathcal{O}(k)$ measurements, where $k$ is the sparsity of the signal. Motivated by these results, we introduce a simplified proportional-dimensional asymptotic framework which enables the quantitative evaluation of recovery guarantees … Read more
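A small sketch of what the rooted-tree constraint on a support set means, under an assumed binary heap indexing in which node $i$ has parent $(i-1)/2$; real wavelet decompositions have one tree per subband, so this indexing is purely illustrative.

```python
def is_rooted_tree_support(support):
    """Check that a coefficient support forms a rooted tree under heap
    indexing: the root is 0 and the parent of node i is (i - 1) // 2."""
    s = set(support)
    return all(i == 0 or (i - 1) // 2 in s for i in s)

print(is_rooted_tree_support({0, 1, 2, 5}))   # True: parent chain of 5 is present
print(is_rooted_tree_support({0, 5}))         # False: 5's parent 2 is missing
```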

New Improved Penalty Methods for Sparse Reconstruction Based on Difference of Two Norms

In this paper, we further establish two types of DC (difference of convex functions) programming for $l_0$ sparse reconstruction. Our DC objective functions are given by the difference of two norms. One is the difference of the $l_1$ and $l_{\sigma_q}$ norms (DC $l_1$-$l_{\sigma_q}$ for short), where $l_{\sigma_q}$ is the $l_1$ norm of the $q$-term ($q\geq1$) best … Read more
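A hedged sketch of the resulting penalty, assuming $l_{\sigma_q}$ is the $l_1$ norm of the best $q$-term approximation, i.e. the sum of the $q$ largest magnitudes: the difference is nonnegative and vanishes exactly on $q$-sparse vectors.

```python
import numpy as np

def dc_penalty(x, q):
    """||x||_1 - l_{sigma_q}(x), where l_{sigma_q}(x) is the sum of the q
    largest magnitudes of x. Nonnegative, and zero iff x is q-sparse."""
    mags = np.sort(np.abs(x))[::-1]       # descending magnitudes
    return mags.sum() - mags[:q].sum()

x = np.array([3.0, -1.5, 0.0, 0.2])
print(dc_penalty(x, q=2))                 # 0.2: only entries beyond the top 2 count
print(dc_penalty(x, q=3))                 # 0.0: x is 3-sparse
```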

A Preconditioner for a Primal-Dual Newton Conjugate Gradients Method for Compressed Sensing Problems

In this paper we are concerned with the solution of Compressed Sensing (CS) problems where the signals to be recovered are sparse in coherent and redundant dictionaries. We extend a primal-dual Newton Conjugate Gradients (pdNCG) method to CS problems of this type, and we provide an inexpensive and provably effective preconditioning technique for the linear systems that arise in pdNCG. Numerical results … Read more
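For readers unfamiliar with preconditioned CG, a generic illustration with a simple Jacobi (diagonal) preconditioner follows; the paper's pdNCG preconditioner is specialized to these CS linear systems and is not reproduced here.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

rng = np.random.default_rng(2)
n = 200
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)          # symmetric positive definite test matrix
b = rng.standard_normal(n)

M = diags(1.0 / np.diag(A))          # Jacobi preconditioner: M approximates A^{-1}
x, info = cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))   # info == 0 signals convergence
```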

A Second-Order Method for Compressed Sensing Problems with Coherent and Redundant Dictionaries

In this paper we are interested in the solution of Compressed Sensing (CS) problems where the signals to be recovered are sparse in coherent and redundant dictionaries. CS problems of this type are convex with a non-smooth, non-separable regularization term; a specialized solver is therefore required. We propose a primal-dual Newton Conjugate Gradients (pdNCG) method. … Read more
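One standard device for applying Newton-type machinery to such non-smooth terms, used in this line of work, is pseudo-Huber smoothing of the absolute value; the sketch below is illustrative, and the smoothing parameter $\mu$ is an assumption.

```python
import numpy as np

def pseudo_huber(t, mu):
    """Pseudo-Huber smoothing of |t|: twice differentiable, and -> |t| as
    mu -> 0, which lets Newton-CG machinery handle l1-type regularizers."""
    return np.sqrt(mu**2 + t**2) - mu

def pseudo_huber_grad(t, mu):
    return t / np.sqrt(mu**2 + t**2)

t = np.linspace(-2.0, 2.0, 5)
print(pseudo_huber(t, mu=1e-3))       # close to |t|
print(pseudo_huber_grad(t, mu=1e-3))  # close to sign(t) away from 0
```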

Optimal subgradient algorithms with application to large-scale linear inverse problems

This study addresses algorithms that use first-order information to solve structured unconstrained convex optimization problems in which the underlying function involves high-dimensional data. The primary aim is to develop an implementable algorithmic framework for solving problems with multi-term composite objective functions involving linear mappings using the optimal subgradient algorithm, OSGA, proposed by Neumaier in \cite{NeuO}. … Read more
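To fix ideas about the problem class, here is a hedged sketch of a multi-term composite objective with linear mappings, minimized with a plain subgradient step; this is not OSGA itself, which additionally maintains optimal step-size and averaging sequences.

```python
import numpy as np

def subgradient_step(x, A, b, W, lam, step):
    """One subgradient step on f(x) = 0.5*||Ax - b||^2 + lam*||Wx||_1,
    a two-term composite objective with linear mappings A and W."""
    g = A.T @ (A @ x - b) + lam * (W.T @ np.sign(W @ x))   # a subgradient of f
    return x - step * g

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 60)); b = rng.standard_normal(40)
W = np.eye(60)                        # illustrative choice of the second mapping
x = np.zeros(60)
for k in range(1, 201):               # diminishing step sizes 1/k
    x = subgradient_step(x, A, b, W, lam=0.1, step=1.0 / k)
```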

Extreme point inequalities and geometry of the rank sparsity ball

We investigate geometric features of the unit ball corresponding to the sum of the nuclear norm of a matrix and the $\ell_1$ norm of its entries, a common penalty function encouraging joint low rank and high sparsity. As a byproduct of this effort, we develop a calculus (or algebra) of faces for general convex … Read more
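The penalty whose unit ball is studied is straightforward to evaluate; a small sketch:

```python
import numpy as np

def rank_sparsity_norm(X):
    """Nuclear norm (sum of singular values) plus entrywise l1 norm: the
    penalty whose unit ball the paper studies."""
    return np.linalg.svd(X, compute_uv=False).sum() + np.abs(X).sum()

X = np.diag([2.0, 1.0])
print(rank_sparsity_norm(X))          # nuclear = 3.0, l1 = 3.0, total = 6.0
```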