pcaL1: An Implementation in R of Three Methods for L1-Norm Principal Component Analysis

pcaL1 is an R package for finding principal components using methods based on the L1 norm. The principal components derived using traditional principal component analysis (PCA) can be interpreted as an optimal solution to several optimization problems involving the L2 norm. Using the L1 norm in these problems provides an alternative that …
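As a rough illustration of the L1-PCA idea, here is a minimal NumPy sketch of the fixed-point iteration of Kwak (2008), which maximizes ||Xw||_1 over unit vectors w. It is one well-known L1-norm PCA method, not necessarily identical to the three algorithms shipped in pcaL1 (which is an R package); all names and parameters are illustrative.

```python
import numpy as np

def pca_l1_component(X, n_iter=100, seed=0):
    """Leading L1-norm principal component via a Kwak-style fixed-point
    iteration: maximize ||X w||_1 subject to ||w||_2 = 1."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(X @ w)
        s[s == 0] = 1.0                  # avoid zero signs stalling the update
        w_new = X.T @ s
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):        # fixed point reached
            break
        w = w_new
    return w

X = np.random.default_rng(1).standard_normal((200, 5))
X -= X.mean(axis=0)                      # center the data, as in ordinary PCA
print(pca_l1_component(X))
```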

Atomic norm denoising with applications to line spectral estimation

The sub-Nyquist estimation of line spectra is a classical problem in signal processing, but currently popular subspace-based techniques have few guarantees in the presence of noise and rely on a priori knowledge about system model order. Motivated by recent work on atomic norms in inverse problems, we propose a new approach to line spectral estimation …
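Exact atomic norm denoising is solved via a semidefinite program; as a hedged stand-in, the sketch below approximates it by running ISTA on a fine frequency grid, which turns the atomic norm into an ordinary L1 penalty over grid atoms. The sizes, tau, and the 0.5 threshold are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
t = np.arange(n)
# Noisy superposition of two complex sinusoids at frequencies 0.10 and 0.34.
y = (np.exp(2j * np.pi * 0.10 * t) + 0.8 * np.exp(2j * np.pi * 0.34 * t)
     + 0.3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))

# Fine frequency grid; each unit-norm column of A is one candidate atom a(f).
freqs = np.arange(1000) / 1000
A = np.exp(2j * np.pi * np.outer(t, freqs)) / np.sqrt(n)

# ISTA on the grid-discretized surrogate: min 0.5||y - Ac||^2 + tau||c||_1.
tau = 2.0
step = 1.0 / np.linalg.norm(A, 2) ** 2
c = np.zeros(len(freqs), dtype=complex)
for _ in range(300):
    g = c - step * (A.conj().T @ (A @ c - y))             # gradient step
    scale = np.maximum(1 - step * tau / np.maximum(np.abs(g), 1e-12), 0)
    c = g * scale                                          # complex soft threshold
# The surviving coefficients cluster around the true frequencies.
print("recovered frequencies:", freqs[np.abs(c) > 0.5])
```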

Limited Memory Block Krylov Subspace Optimization for Computing Dominant Singular Value Decompositions

In many data-intensive applications, the use of principal component analysis (PCA) and other related techniques is ubiquitous for dimension reduction, data mining, or other transformational purposes. Such transformations often require computing dominant singular value decompositions (SVDs) of large unstructured matrices efficiently, reliably, and accurately. In this paper, we propose and study a subspace optimization technique …
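As a simplified point of reference, the following NumPy sketch computes a dominant-k SVD by block subspace (power) iteration with QR re-orthogonalization and a final Rayleigh-Ritz step. The paper's limited-memory block Krylov method is more sophisticated, but the underlying subspace idea is the same; sizes are illustrative.

```python
import numpy as np

def dominant_svd(A, k, n_iter=50, seed=0):
    """Dominant-k SVD by block subspace iteration: repeatedly apply A^T A
    to a k-dimensional subspace, re-orthogonalize, then extract singular
    triplets from the projected matrix."""
    rng = np.random.default_rng(seed)
    Q = np.linalg.qr(rng.standard_normal((A.shape[1], k)))[0]
    for _ in range(n_iter):
        Q = np.linalg.qr(A.T @ (A @ Q))[0]   # one block power step
    B = A @ Q                                 # project A onto the subspace
    U, s, Wt = np.linalg.svd(B, full_matrices=False)
    return U, s, Wt @ Q.T                     # A ~= U @ diag(s) @ Vt

A = np.random.default_rng(1).standard_normal((500, 120))
U, s, Vt = dominant_svd(A, k=10)
print("max relative error vs exact singular values:",
      np.max(np.abs(s - np.linalg.svd(A, compute_uv=False)[:10]) / s))
```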

A First-Order Smoothing Technique for a Class of Large-Scale Linear Programs

We study a class of linear programming (LP) problems motivated by large-scale machine learning applications. After reformulating the LP as a convex nonsmooth problem, we apply Nesterov’s primal-dual smoothing technique. It turns out that the iteration complexity of the smoothing technique depends on a parameter $\theta$ that arises because we need to bound the originally …
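A minimal sketch of the smoothing idea, assuming the nonsmooth objective is a max of affine functions f(x) = max_i (a_i^T x + b_i): replace f by a softmax surrogate with smoothing parameter mu and minimize it with Nesterov's accelerated gradient method. The data and parameters below are illustrative stand-ins, not the LP class from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 20
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)

def smoothed_grad(x, mu):
    """Gradient of the surrogate mu*log(sum_i exp((a_i^T x + b_i)/mu)),
    which is within mu*log(m) of f(x) = max_i (a_i^T x + b_i)."""
    z = (A @ x + b) / mu
    w = np.exp(z - z.max())
    w /= w.sum()                       # softmax weights, numerically stabilized
    return A.T @ w

mu = 1e-2                              # trade-off: accuracy vs. smoothness
L = np.linalg.norm(A, 2) ** 2 / mu     # Lipschitz constant of the smoothed gradient
x = np.zeros(n)
x_prev = x.copy()
for t in range(1, 2001):               # Nesterov's accelerated gradient method
    yk = x + (t - 1) / (t + 2) * (x - x_prev)
    x_prev = x
    x = yk - smoothed_grad(yk, mu) / L
print("objective max_i (a_i^T x + b_i):", (A @ x + b).max())
```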

A proximal point algorithm for sequential feature extraction applications

We propose a proximal point algorithm to solve the LAROS problem, that is, the problem of finding a “large approximately rank-one submatrix”. This LAROS problem is used to sequentially extract features in data. We also develop a new stopping criterion for the proximal point algorithm, which is based on the duality conditions of $\epsilon$-optimal solutions of …
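The generic proximal point iteration x_{k+1} = argmin_z f(z) + ||z - x_k||^2 / (2*lam) can be shown on a toy problem; the sketch below applies it to a degenerate convex quadratic, where each subproblem is a regularized linear solve. The LAROS formulation and the paper's duality-based stopping rule are more involved; everything here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
B = rng.standard_normal((n, n - 5))
Q = B @ B.T                            # singular PSD Hessian: plain Newton fails,
c = Q @ rng.standard_normal(n)         # but c is consistent, so minimizers exist

lam = 1.0                              # proximal parameter
x = np.zeros(n)
for _ in range(200):
    # Proximal point step: x+ = argmin_z 0.5 z'Qz - c'z + ||z - x||^2 / (2 lam);
    # the proximal term makes each subproblem strongly convex and solvable.
    x = np.linalg.solve(Q + np.eye(n) / lam, c + x / lam)
print("gradient norm at the limit point:", np.linalg.norm(Q @ x - c))
```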

Manifold Identification in Dual Averaging for Regularized Stochastic Online Learning

Iterative methods that calculate their steps from approximate subgradient directions have proved to be useful for stochastic learning problems over large and streaming data sets. When the objective consists of a loss function plus a nonsmooth regularization term, the solution often lies on a low-dimensional manifold of parameter space along which the regularizer is smooth. …
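To make the phenomenon concrete, here is a small NumPy sketch of L1-regularized dual averaging (RDA, in the style of Xiao 2010) on a noisy sparse regression stream. The closed-form RDA update soft-thresholds the averaged gradient, and after enough samples the iterate's support settles on the low-dimensional manifold (the true sparsity pattern). Parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, lam, gamma = 20, 0.1, 1.0
x_true = np.zeros(d)
x_true[:3] = [1.0, -2.0, 0.5]          # sparse ground truth: the "manifold" is
                                       # the set of vectors with this support
gbar = np.zeros(d)                     # running average of stochastic gradients
x = np.zeros(d)
for t in range(1, 5001):
    a = rng.standard_normal(d)
    y = a @ x_true + 0.1 * rng.standard_normal()
    g = (a @ x - y) * a                # stochastic gradient of the squared loss
    gbar += (g - gbar) / t
    # RDA step, closed form:
    # x = argmin_z <gbar,z> + lam*||z||_1 + gamma*||z||^2 / (2*sqrt(t))
    x = -np.sign(gbar) * np.maximum(np.abs(gbar) - lam, 0.0) * np.sqrt(t) / gamma
print("identified support:", np.nonzero(x)[0])   # typically {0, 1, 2}
```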

HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent

Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performance-destroying memory locking and synchronization. This work aims to show, using novel theoretical analysis, algorithms, and implementation, that SGD can be implemented without …
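A toy illustration of the lock-free pattern, assuming Python threads and NumPy. Note that CPython's GIL serializes bytecode, so this only demonstrates the unsynchronized read-modify-write scheme, not the actual HOGWILD! speedups; all sizes and the step size are illustrative.

```python
import numpy as np
import threading

rng = np.random.default_rng(0)
n, d = 10_000, 50
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
y = A @ x_true + 0.01 * rng.standard_normal(n)

x = np.zeros(d)                        # shared model, updated with no locks

def worker(seed, steps=20_000, lr=1e-3):
    r = np.random.default_rng(seed)
    for _ in range(steps):
        i = int(r.integers(n))
        g = (A[i] @ x - y[i]) * A[i]   # stochastic gradient on one example
        x[:] -= lr * g                 # racy read-modify-write, no synchronization

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print("parameter error:", np.linalg.norm(x - x_true))
```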

Parallel Stochastic Gradient Algorithms for Large-Scale Matrix Completion

This paper develops Jellyfish, an algorithm for solving data-processing problems with matrix-valued decision variables regularized to have low rank. Particular examples of problems solvable by Jellyfish include matrix completion problems and least-squares problems regularized by the nuclear norm or the max-norm. Jellyfish implements a projected incremental gradient method with a biased, random ordering of the …
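The serial core of this approach, stripped of Jellyfish's parallel partitioning, is incremental gradient descent on low-rank factors with a random ordering of the observed entries each pass. A hedged NumPy sketch with illustrative sizes and step sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 100, 80, 5
L0, R0 = rng.standard_normal((m, r)), rng.standard_normal((n, r))
M = L0 @ R0.T                                   # ground-truth low-rank matrix
mask = rng.random((m, n)) < 0.3                 # 30% of entries observed
obs = np.argwhere(mask)

L = 0.1 * rng.standard_normal((m, r))
R = 0.1 * rng.standard_normal((n, r))
lr, mu = 0.02, 1e-4                             # step size, regularization
for epoch in range(50):
    rng.shuffle(obs)                            # random ordering each pass
    for i, j in obs:
        e = L[i] @ R[j] - M[i, j]               # residual on one observed entry
        Li = L[i].copy()
        L[i] -= lr * (e * R[j] + mu * L[i])     # incremental gradient steps
        R[j] -= lr * (e * Li + mu * R[j])
rmse = np.sqrt(np.mean((L @ R.T - M)[~mask] ** 2))
print("held-out RMSE:", rmse)
```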

Optimal Distributed Online Prediction using Mini-Batches

Online prediction methods are typically presented as serial algorithms running on a single processor. However, in the age of web-scale prediction problems, it is increasingly common to encounter situations where a single processor cannot keep up with the high rate at which inputs arrive. In this work we present the distributed mini-batch algorithm, a method …
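The algorithmic core is simple: accumulate gradients over a mini-batch of b inputs (in the distributed setting, each of k workers handles b/k of them and the averages are combined), then take one step, reducing gradient variance by roughly a factor of b. A single-machine simulation with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, b = 20, 8, 256            # dimension, simulated workers, batch size
x_true = rng.standard_normal(d)
x = np.zeros(d)
lr = 0.5
for t in range(1, 501):
    # One round: b fresh inputs arrive; in the distributed algorithm each of
    # the k workers would process b/k of them before the gradients are averaged.
    A = rng.standard_normal((b, d))
    y = A @ x_true + 0.1 * rng.standard_normal(b)
    grad = A.T @ (A @ x - y) / b          # average gradient over the mini-batch
    x -= (lr / np.sqrt(t)) * grad         # single update per round
print("parameter error:", np.linalg.norm(x - x_true))
```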

An Alternating Direction Algorithm for Matrix Completion with Nonnegative Factors

This paper introduces a novel algorithm for the nonnegative matrix factorization and completion problem, which aims to find nonnegative matrices X and Y from a subset of entries of a nonnegative matrix M so that XY approximates M. This problem is closely related to two existing problems: nonnegative matrix factorization and low-rank matrix completion, …
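As a hedged sketch of the alternating idea (the paper's actual ADMM splitting differs), the following projected alternating least-squares loop fits nonnegative factors X and Y and re-imposes the observed entries of M after each pass; all sizes and the 300-iteration budget are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 60, 50, 4
M = np.abs(rng.standard_normal((m, r))) @ np.abs(rng.standard_normal((r, n)))
mask = rng.random((m, n)) < 0.5                 # observed entry pattern

X = np.abs(rng.standard_normal((m, r)))
Y = np.abs(rng.standard_normal((r, n)))
Z = np.where(mask, M, M[mask].mean())           # working full matrix
for _ in range(300):
    # Alternating least-squares updates projected onto the nonnegative
    # orthant, followed by re-imposing the observed data on Z.
    X = np.maximum(Z @ Y.T @ np.linalg.inv(Y @ Y.T + 1e-8 * np.eye(r)), 0)
    Y = np.maximum(np.linalg.inv(X.T @ X + 1e-8 * np.eye(r)) @ X.T @ Z, 0)
    Z = np.where(mask, M, X @ Y)                # fill unobserved with the model
print("relative fit on observed entries:",
      np.linalg.norm((X @ Y - M)[mask]) / np.linalg.norm(M[mask]))
```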