Safe Feature Elimination in Sparse Supervised Learning

We investigate fast methods that allow one to quickly eliminate variables (features) in supervised learning problems involving a convex loss function and an $\ell_1$-norm penalty, leading to a potentially substantial reduction in the number of variables prior to running the supervised learning algorithm. The methods are not heuristic: they only eliminate features that are guaranteed …
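
To make the flavor of such a test concrete, here is a minimal sketch for the squared-loss (lasso) case, $\min_w \frac{1}{2}\|y - Xw\|_2^2 + \lambda \|w\|_1$, where a feature can be provably discarded when its correlation with the response falls below a certified threshold. The data, penalty level, and function name are illustrative assumptions, and the test shown is only the simplest member of the family (the paper treats general convex losses):

```python
import numpy as np

def safe_lasso_screen(X, y, lam):
    """Boolean mask of features that may still be active at the optimum.

    SAFE-style test for the lasso  min_w 0.5*||y - Xw||^2 + lam*||w||_1:
    feature j is provably inactive (w_j = 0) whenever
        |x_j^T y| < lam - ||x_j|| * ||y|| * (lam_max - lam) / lam_max,
    with lam_max = max_k |x_k^T y|, the smallest penalty giving w = 0.
    """
    corr = np.abs(X.T @ y)                   # |x_j^T y| for each feature j
    lam_max = corr.max()
    radius = np.linalg.norm(y) * (lam_max - lam) / lam_max
    col_norms = np.linalg.norm(X, axis=0)    # ||x_j|| for each feature j
    return corr >= lam - col_norms * radius  # True = keep the feature

# Illustrative use: screen a random problem before calling a lasso solver.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2000))
y = rng.standard_normal(100)
lam = 0.9 * np.abs(X.T @ y).max()
mask = safe_lasso_screen(X, y, lam)
print(f"kept {mask.sum()} of {mask.size} features")
```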

On the Quality of a Semidefinite Programming Bound for Sparse Principal Component Analysis

We examine the problem of approximating a positive semidefinite matrix $\Sigma$ by a dyad $xx^T$, with a penalty on the cardinality of the vector $x$. This problem arises in sparse principal component analysis, where a decomposition of $\Sigma$ involving sparse factors is sought. We express this hard, combinatorial problem as a maximum eigenvalue problem, in …
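
One natural way to write the penalized dyadic approximation described above, assuming a Frobenius-norm fit as in the companion sparse-PCA paper below, is
\[
\psi(\rho) \;=\; \min_{x \in \mathbb{R}^n} \;\|\Sigma - xx^T\|_F^2 \;+\; \rho\,\mathbf{Card}(x).
\]
For any fixed support set $I$ of $x$, the inner minimization is an eigenvalue computation, $\min_{\mathrm{supp}(x) \subseteq I} \|\Sigma - xx^T\|_F^2 = \|\Sigma\|_F^2 - \lambda_{\max}(\Sigma_{II})^2$, so the combinatorial problem trades the largest eigenvalue of a principal submatrix against the size of its support; this is the sense in which it can be cast as a maximum eigenvalue problem.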

Sparse Covariance Selection via Robust Maximum Likelihood Estimation

We address a problem of covariance selection, where we seek a trade-off between a high likelihood and a small number of non-zero elements in the inverse covariance matrix. We solve a maximum likelihood problem with a penalty term given by the sum of absolute values of the elements of the inverse covariance matrix, and allow for …
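
The penalized likelihood problem visible above translates directly into a convex program. Below is a minimal sketch using cvxpy, with the sample covariance $S$ and penalty weight $\lambda$ as illustrative assumptions; the truncated part of the abstract (the "allow for …" extension) is not modeled here:

```python
import cvxpy as cp
import numpy as np

# Illustrative sample covariance S of n variables (assumption).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
S = np.cov(A, rowvar=False)
n = S.shape[0]
lam = 0.1  # penalty weight on the absolute entries of X (assumption)

# X estimates the inverse covariance matrix; maximize the penalized
# log-likelihood  log det X - <S, X> - lam * sum_ij |X_ij|.
X = cp.Variable((n, n), PSD=True)
objective = cp.Maximize(cp.log_det(X) - cp.trace(S @ X) - lam * cp.sum(cp.abs(X)))
cp.Problem(objective).solve(solver=cp.SCS)
print(np.round(X.value, 3))  # small entries are driven toward zero
```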

A direct formulation for sparse PCA using semidefinite programming

We examine the problem of approximating, in the Frobenius-norm sense, a positive semidefinite symmetric matrix by a rank-one matrix, with an upper bound on the cardinality of its eigenvector. The problem arises in the decomposition of a covariance matrix into sparse factors, and has wide applications ranging from biology to finance. We use a modification …
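
A common semidefinite relaxation in this line of work replaces the rank-one matrix $xx^T$ by a positive semidefinite variable $X$ with unit trace, and relaxes the cardinality bound $k$ on the eigenvector to the constraint $\mathbf{1}^T |X| \mathbf{1} \le k$. The sketch below solves that relaxation with cvxpy on illustrative data; whether this matches the exact modification used in the paper is not visible in the excerpt:

```python
import cvxpy as cp
import numpy as np

# Illustrative covariance Sigma and cardinality target k (assumptions).
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 6))
Sigma = np.cov(A, rowvar=False)
n, k = Sigma.shape[0], 2

X = cp.Variable((n, n), PSD=True)        # relaxes the dyad x x^T
prob = cp.Problem(
    cp.Maximize(cp.trace(Sigma @ X)),    # variance explained
    [cp.trace(X) == 1,                   # relaxes ||x||_2 = 1
     cp.sum(cp.abs(X)) <= k],            # relaxes Card(x) <= k
)
prob.solve(solver=cp.SCS)

# The leading eigenvector of X gives an (approximately) sparse loading.
w, V = np.linalg.eigh(X.value)
print(np.round(V[:, -1], 3))
```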