Probabilistic Iterative Hard Thresholding for Sparse Learning

In statistical modeling where the dimensionality of the data is large relative to the sample size, identifying hidden sparsity in the ground truth can be critical to formulating an accurate statistical model. The so-called "l0 norm", which counts the number of non-zero components of a vector, is a strong and reliable mechanism for enforcing sparsity when incorporated into an optimization problem. However, in big data settings, where noisy estimates of the gradient must be used out of computational necessity, the literature on methods that reliably converge is scant. In this paper we present an approach to solving expectation objective optimization problems with cardinality constraints. We prove convergence of the underlying stochastic process and demonstrate the method's performance on two machine learning problems.
