Compressed Sensing Off the Grid

We consider the problem of estimating the frequency components of a mixture of s complex sinusoids from a random subset of n regularly spaced samples. Unlike previous work in compressed sensing, the frequencies are not assumed to lie on a grid, but can assume any values in the normalized frequency domain [0, 1]. We propose …
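
For concreteness, a minimal sketch of the observation model in Python (numpy assumed; the dimensions, frequencies, and sampling fraction are illustrative, and this shows only the measurement setup, not the paper's estimator):

    import numpy as np

    rng = np.random.default_rng(0)
    n, s = 64, 3                        # illustrative: regularly spaced samples, sinusoids
    freqs = rng.uniform(0.0, 1.0, s)    # off-grid frequencies anywhere in [0, 1]
    amps = rng.standard_normal(s) + 1j * rng.standard_normal(s)

    t = np.arange(n)
    x = sum(a * np.exp(2j * np.pi * f * t) for a, f in zip(amps, freqs))

    m = 24                              # illustrative subset size
    obs = np.sort(rng.choice(n, size=m, replace=False))
    y = x[obs]                          # the data available to the estimator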

Learning Circulant Sensing Kernels

In signal acquisition, Toeplitz and circulant matrices are widely used as sensing operators. They correspond to discrete convolutions and are easily, or even naturally, realized in various applications. For compressive sensing, recent work has used random Toeplitz and circulant sensing matrices and established their efficiency in theory, in computer simulations, and through physical …
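
Because a circulant matrix acts as a circular convolution, the sensing operator never needs to be formed explicitly; a small sanity check of that identity in Python (numpy/scipy assumed; the kernel and subsampling pattern are random placeholders):

    import numpy as np
    from scipy.linalg import circulant

    rng = np.random.default_rng(0)
    n = 8
    c = rng.standard_normal(n)          # first column of the circulant kernel (illustrative)
    x = rng.standard_normal(n)          # signal to be sensed

    y_dense = circulant(c) @ x          # explicit sensing matrix ...
    y_fft = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))  # ... vs. FFT convolution
    assert np.allclose(y_dense, y_fft)

    keep = np.sort(rng.choice(n, size=4, replace=False))
    y = y_fft[keep]                     # compressive measurements: subsampled outputs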

A Block Coordinate Descent Method for Regularized Multi-Convex Optimization with Applications to Nonnegative Tensor Factorization and Completion

This paper considers regularized block multi-convex optimization, where the feasible set and objective function are generally non-convex but convex in each block of variables. We review some of its interesting examples and propose a generalized block coordinate descent method. (Using proximal updates, we further allow non-convexity over some blocks.) Under certain conditions, we show that …
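
A minimal sketch of the block structure on the simplest multi-convex instance, min ||M - XY||_F^2 over X, Y >= 0: the problem is non-convex jointly but convex in each block, and each update below is a projected gradient step on one block. This is a toy simplification, not the paper's general scheme; sizes and iteration counts are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    M = np.abs(rng.standard_normal((30, 20)))   # illustrative data
    r = 5
    X = np.abs(rng.standard_normal((30, r)))
    Y = np.abs(rng.standard_normal((r, 20)))

    for _ in range(200):
        # Block 1: with Y fixed the problem is convex in X (projected gradient step).
        G = (X @ Y - M) @ Y.T
        X = np.maximum(0.0, X - G / (np.linalg.norm(Y @ Y.T, 2) + 1e-12))
        # Block 2: with X fixed the problem is convex in Y.
        G = X.T @ (X @ Y - M)
        Y = np.maximum(0.0, Y - G / (np.linalg.norm(X.T @ X, 2) + 1e-12))

    print(np.linalg.norm(M - X @ Y) / np.linalg.norm(M))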

Fast and Robust Recursive Algorithms for Separable Nonnegative Matrix Factorization

In this paper, we study the nonnegative matrix factorization problem under the separability assumption (that is, there exists a cone spanned by a small subset of the columns of the input nonnegative data matrix containing all columns), which is equivalent to the hyperspectral unmixing problem under the linear mixing model and the pure-pixel assumption. We …
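
Under separability, a recursive pure-pixel search applies: repeatedly pick the extreme column, project it out, and recurse. A sketch of the standard successive-projection template in Python (numpy assumed; this is the generic recursion, not necessarily the exact robust variant the paper analyzes):

    import numpy as np

    def spa(X, r):
        """Select r column indices: take the largest-norm column, project it out."""
        R = X.astype(float).copy()
        anchors = []
        for _ in range(r):
            j = int(np.argmax(np.linalg.norm(R, axis=0)))
            u = R[:, j] / np.linalg.norm(R[:, j])
            R -= np.outer(u, u @ R)          # remove the selected direction
            anchors.append(j)
        return anchors

    # Toy separable data: columns 0-3 generate the rest as convex mixtures (assumed setup).
    rng = np.random.default_rng(0)
    W = np.abs(rng.standard_normal((40, 4)))
    X = W @ np.hstack([np.eye(4), rng.dirichlet(np.ones(4), size=16).T])
    print(sorted(spa(X, 4)))                  # typically recovers [0, 1, 2, 3]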

A variable smoothing algorithm for solving convex optimization problems

In this article we propose a method for solving unconstrained optimization problems with convex and Lipschitz continuous objective functions. By making use of the Moreau envelopes of the functions occurring in the objective, we smooth the latter to a convex and differentiable function with Lipschitz continuous gradient by using both variable and constant smoothing parameters. …
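
To make the device concrete: the Moreau envelope of |.| with parameter mu is the Huber function, whose derivative is clip(t/mu, -1, 1), so a nonsmooth L1 fit can be driven by plain gradient steps while the smoothing parameter varies across iterations. A toy Python sketch (the schedule mu_k = 1/k and all problem data are illustrative; this shows only the envelope idea, not the article's algorithm or step-size rule):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 5))
    b = A @ np.array([1.0, -2.0, 0.0, 0.5, 3.0]) + 0.1 * rng.standard_normal(50)

    x = np.zeros(5)
    for k in range(1, 501):
        mu = 1.0 / k                                # variable smoothing parameter (illustrative)
        grad = A.T @ np.clip((A @ x - b) / mu, -1.0, 1.0)  # gradient of the smoothed L1 loss
        L = np.linalg.norm(A, 2) ** 2 / mu          # Lipschitz constant of that gradient
        x -= grad / L
    print(x)                                        # approaches the L1 regression fit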

Matheuristics for $\psi$-Learning

Recently, the so-called $\psi$-learning approach, the Support Vector Machine (SVM) classifier obtained with the ramp loss, has attracted attention from the computational point of view. A Mixed Integer Nonlinear Programming (MINLP) formulation has been proposed for $\psi$-learning, but solving this MINLP formulation to optimality is only possible for datasets of small size. For datasets of …
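
For orientation, the ramp loss is a truncated hinge; in one common parameterization it equals min(2, max(0, 1 - y f(x))), so well-classified points cost 0 and gross outliers cost at most 2, which bounds their influence but makes the loss non-convex (hence the MINLP). A short numeric illustration in Python (the cap of 2 is one conventional choice, assumed here):

    import numpy as np

    def hinge(z):
        return np.maximum(0.0, 1.0 - z)

    def ramp(z, cap=2.0):
        # Truncated hinge: bounded, hence outlier-robust, but non-convex.
        return np.minimum(cap, hinge(z))

    z = np.array([3.0, 0.5, -0.5, -10.0])   # margins y * f(x)
    print(hinge(z))                          # [ 0.   0.5  1.5 11. ]
    print(ramp(z))                           # [ 0.   0.5  1.5  2. ]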

Factoring nonnegative matrices with linear programs

This paper describes a new approach for computing nonnegative matrix factorizations (NMFs) with linear programming. The key idea is a data-driven model for the factorization, in which the most salient features in the data are used to express the remaining features. More precisely, given a data matrix X, the algorithm identifies a matrix C that …
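
The self-expression model can be sketched in miniature: once a set of salient columns is fixed, every remaining column is written as a nonnegative combination of them. The sketch below assumes the anchor indices are given and fits the coefficients with nonnegative least squares rather than the paper's linear programs; it illustrates only the model X ≈ X[:, anchors] @ W:

    import numpy as np
    from scipy.optimize import nnls

    # Toy data whose first four columns really do generate the rest (assumed setup).
    rng = np.random.default_rng(0)
    W0 = np.abs(rng.standard_normal((20, 4)))
    X = W0 @ np.hstack([np.eye(4), rng.dirichlet(np.ones(4), size=8).T])

    anchors = [0, 1, 2, 3]      # assumed known here; identifying them is the hard part
    B = X[:, anchors]
    W = np.column_stack([nnls(B, X[:, j])[0] for j in range(X.shape[1])])
    print(np.linalg.norm(X - B @ W) / np.linalg.norm(X))   # ~0 for this toy instance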

pcaL1: An Implementation in R of Three Methods for L1-Norm Principal Component Analysis

pcaL1 is a package for the R environment for finding principal components using methods based on the L1 norm. The principal components derived using traditional principal component analysis (PCA) can be interpreted as an optimal solution to several optimization problems involving the L2 norm. Using the L1 norm in these problems provides an alternative that …
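
To see the distinction the package is built around: the classical first principal component is optimal for a squared (L2) reconstruction criterion, and replacing it with absolute deviations (L1) changes which direction is best, particularly under outliers. A small comparison of the two criteria in Python (this only evaluates the objectives for candidate directions; it does not reproduce any pcaL1 algorithm or its R interface):

    import numpy as np

    def recon_errors(X, v):
        """L2 and L1 reconstruction errors of X after rank-1 projection on v."""
        v = v / np.linalg.norm(v)
        R = X - np.outer(X @ v, v)
        return float(np.sum(R ** 2)), float(np.sum(np.abs(R)))

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 2)) * np.array([3.0, 1.0])  # main axis along e1 (illustrative)
    X[:5] += 25.0                                 # a few gross outliers
    X -= X.mean(axis=0)

    u2 = np.linalg.svd(X, full_matrices=False)[2][0]   # L2-optimal component
    print(recon_errors(X, u2))
    print(recon_errors(X, np.array([1.0, 0.0])))       # candidate that ignores the outliers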

Atomic norm denoising with applications to line spectral estimation

The sub-Nyquist estimation of line spectra is a classical problem in signal processing, but currently popular subspace-based techniques have few guarantees in the presence of noise and rely on a priori knowledge about system model order. Motivated by recent work on atomic norms in inverse problems, we propose a new approach to line spectral estimation …
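
For orientation, the denoising estimator in this line of work is a semidefinite program; writing $T(u)$ for the Hermitian Toeplitz matrix with first column $u$ and $\tau$ for the regularization weight (notation assumed here; see the paper for the precise statement), the standard atomic-norm characterization for line-spectral atoms gives

    \min_{x \in \mathbb{C}^n,\, u \in \mathbb{C}^n,\, t \in \mathbb{R}} \;
        \tfrac{1}{2}\|y - x\|_2^2 \;+\; \frac{\tau}{2}\Big(t + \tfrac{1}{n}\operatorname{tr} T(u)\Big)
    \quad \text{s.t.} \quad
    \begin{bmatrix} T(u) & x \\ x^{*} & t \end{bmatrix} \succeq 0,

where the second term is the semidefinite representation of the atomic norm of $x$ over complex sinusoids, so the program trades data fidelity against spectral sparsity.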

Limited Memory Block Krylov Subspace Optimization for Computing Dominant Singular Value Decompositions

In many data-intensive applications, the use of principal component analysis (PCA) and other related techniques is ubiquitous for dimension reduction, data mining or other transformational purposes. Such transformations often require efficiently, reliably and accurately computing dominant singular value decompositions (SVDs) of large unstructured matrices. In this paper, we propose and study a subspace optimization technique …
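
As a baseline for what computing a dominant SVD involves, here is plain block subspace iteration with a Rayleigh-Ritz extraction in Python; this is the simple method that subspace-optimization approaches of this kind accelerate, not the paper's limited-memory algorithm, and the block size and iteration count are illustrative:

    import numpy as np

    def dominant_svd(A, k, iters=30, seed=0):
        """Approximate the k leading singular triplets of A by subspace iteration."""
        rng = np.random.default_rng(seed)
        Q, _ = np.linalg.qr(rng.standard_normal((A.shape[1], k)))
        for _ in range(iters):
            Z, _ = np.linalg.qr(A @ Q)        # refine the left subspace
            Q, _ = np.linalg.qr(A.T @ Z)      # refine the right subspace
        U, s, Wt = np.linalg.svd(A @ Q, full_matrices=False)  # Rayleigh-Ritz step
        return U, s, Q @ Wt.T

    A = np.random.default_rng(1).standard_normal((200, 120))
    U, s, V = dominant_svd(A, 10)
    print(np.max(np.abs(s - np.linalg.svd(A, compute_uv=False)[:10])))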