Compressed Sensing: A Discrete Optimization Approach

We study the Compressed Sensing (CS) problem, which is the problem of finding the sparsest vector that satisfies a set of linear measurements up to some numerical tolerance. CS is a central problem in Statistics, Operations Research, and Machine Learning that arises in applications such as signal processing, data compression, and image reconstruction. We … Read more
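
For reference, the problem described above is standardly written as ($\delta$ is our notation for the numerical tolerance):

$\min_{x \in \mathbb{R}^n} \|x\|_0 \quad \text{subject to} \quad \|Ax - b\|_2 \le \delta,$

where $A$ is the measurement matrix, $b$ is the vector of measurements, and $\|x\|_0$ counts the nonzero entries of $x$.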

First-Order Methods for Nonsmooth Nonconvex Functional Constrained Optimization with or without Slater Points

Constrained optimization problems in which both the objective and the constraints may be nonsmooth and nonconvex arise across many learning and data science settings. In this paper, we show that a simple first-order method finds a feasible, $\epsilon$-stationary point at a convergence rate of $O(\epsilon^{-4})$ without relying on compactness or a constraint qualification (CQ). When CQ holds, this convergence is measured by … Read more
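
The problem class in question can be written generically as (our notation, not necessarily the paper's):

$\min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad g_i(x) \le 0, \; i = 1, \dots, m,$

where both $f$ and the $g_i$ may be nonsmooth and nonconvex, and an $\epsilon$-stationary point is one satisfying first-order optimality conditions up to tolerance $\epsilon$.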

An SDP Relaxation for the Sparse Integer Least Square Problem

In this paper, we study the polynomial approximability or solvability of the sparse integer least squares problem (SILS), an NP-hard variant of the least squares problem in which we only consider sparse $\{0, \pm 1\}$-vectors. We propose an $\ell_1$-based SDP relaxation of SILS and introduce a randomized algorithm for SILS based on this SDP relaxation. In … Read more
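
A sketch of the SILS formulation, as we read the abstract (the sparsity parameter $\sigma$ and the exact form of the sparsity constraint are our assumptions):

$\min_{x} \|Ax - b\|_2^2 \quad \text{subject to} \quad x \in \{0, \pm 1\}^n, \; \|x\|_0 \le \sigma.$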

Sparse multi-term disjunctive cuts for the epigraph of a function of binary variables

We propose a new method for separating valid inequalities for the epigraph of a function of binary variables. The proposed inequalities are disjunctive cuts whose disjunctive terms are obtained by enumerating a subset $I$ of the binary variables. We show that by restricting the support of the cut to the same set of variables $I$, … Read more
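
As background, the epigraph in question is the standard set $E_f = \{(x, z) \in \{0,1\}^n \times \mathbb{R} : z \ge f(x)\}$; fixing the variables in $I$ to each of their $2^{|I|}$ possible 0/1 assignments yields a valid multi-term disjunction from which such cuts can be derived.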

A Graph-based Decomposition Method for Convex Quadratic Optimization with Indicators

In this paper, we consider convex quadratic optimization problems with indicator variables when the matrix $Q$ defining the quadratic term in the objective is sparse. We use a graphical representation of the support of $Q$ and show that if this graph is a path, then we can solve the associated problem in polynomial time. This … Read more
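
A generic instance of this problem class can be written as (our sketch; the linear terms are illustrative):

$\min_{x \in \mathbb{R}^n, \, z \in \{0,1\}^n} a^\top x + x^\top Q x + c^\top z \quad \text{subject to} \quad x_i(1 - z_i) = 0, \; i = 1, \dots, n,$

where the complementarity constraints force $x_i = 0$ whenever the indicator $z_i = 0$, and the graph referred to above has a vertex for each variable and an edge $\{i, j\}$ whenever $Q_{ij} \neq 0$.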

Sparse Plus Low Rank Matrix Decomposition: A Discrete Optimization Approach

We study the Sparse Plus Low-Rank decomposition problem (SLR), which is the problem of decomposing a corrupted data matrix into a sparse matrix of perturbations plus a low-rank matrix containing the ground truth. SLR is a fundamental problem in Operations Research and Machine Learning that arises in various applications, including data compression, latent semantic indexing, … Read more
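
In symbols, SLR seeks a decomposition $D \approx S + L$ of the data matrix $D$ with $S$ sparse and $L$ low-rank; one natural discrete formulation (our sketch, with illustrative budgets $k$ and $r$) is

$\min_{S, L} \|D - S - L\|_F^2 \quad \text{subject to} \quad \|S\|_0 \le k, \; \mathrm{rank}(L) \le r.$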

Cardinality Minimization, Constraints, and Regularization: A Survey

We survey optimization problems that involve the cardinality of variable vectors in constraints or the objective function. We provide a unified viewpoint on the general problem classes and models, and give concrete examples from diverse application fields such as signal and image processing, portfolio selection, or machine learning. The paper discusses general-purpose modeling techniques and … Read more
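
The three problem classes named in the title can be written in standard form, with $\|x\|_0$ denoting the number of nonzero entries of $x$ and $X$ a feasible set: cardinality minimization, $\min \{\|x\|_0 : x \in X\}$; cardinality constraints, $\min \{f(x) : \|x\|_0 \le k, \; x \in X\}$; and cardinality regularization, $\min \{f(x) + \rho \|x\|_0 : x \in X\}$.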

Minimization of L1 over L2 for sparse signal recovery with convergence guarantee

The ratio of the $L_1$ and $L_2$ norms, denoted $L_1/L_2$, is attractive as a sparsity-promoting approximation of the $L_0$ norm because of its scale invariance. In this paper, we incorporate the $L_1/L_2$ formalism into an unconstrained model in order to deal with both noiseless and noisy observations. To design an efficient algorithm, we derive … Read more
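
The scale invariance referred to above is the identity $\|cx\|_1 / \|cx\|_2 = \|x\|_1 / \|x\|_2$ for all $c \neq 0$, which $\|x\|_1$ alone lacks. A typical unconstrained model of the kind described (our sketch, with $\alpha > 0$ an illustrative penalty parameter) is

$\min_x \frac{\|x\|_1}{\|x\|_2} + \frac{\alpha}{2} \|Ax - b\|_2^2.$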

A Subspace Acceleration Method for Minimization Involving a Group Sparsity-Inducing Regularizer

We consider the problem of minimizing an objective function that is the sum of a convex function and a group sparsity-inducing regularizer. Problems that integrate such regularizers arise in modern machine learning applications, often for the purpose of obtaining models that are easier to interpret and that have higher predictive accuracy. We present a new … Read more
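
A canonical objective of this form is the group-lasso-type problem (standard form; the group weights $w_g$, often $\sqrt{|g|}$, are illustrative):

$\min_x f(x) + \lambda \sum_{g \in \mathcal{G}} w_g \|x_g\|_2,$

where $f$ is convex, $\mathcal{G}$ is a collection of index groups (often a partition), and $x_g$ is the subvector of $x$ indexed by $g$; the unsquared $L_2$ norm over each group drives entire groups of variables to zero at once.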

Sparse Regression at Scale: Branch-and-Bound rooted in First-Order Optimization

We consider the least squares regression problem, penalized with a combination of the $L_0$ and $L_2$ norms (a.k.a. $L_0 L_2$ regularization). Recent work presents strong evidence that the resulting $L_0$-based estimators can outperform popular sparse learning methods in many important high-dimensional settings. However, exact computation of $L_0$-based estimators remains a major challenge. Indeed, state-of-the-art mixed … Read more
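
In symbols, the estimator solves $\min_{\beta} \|y - X\beta\|_2^2 + \lambda_0 \|\beta\|_0 + \lambda_2 \|\beta\|_2^2$. The sketch below (ours, not the paper's branch-and-bound method) computes this estimator exactly on a tiny instance by enumerating supports and solving a ridge subproblem on each; the exponential enumeration illustrates why exact computation at scale is the challenge the abstract describes.

    import numpy as np
    from itertools import combinations

    def l0l2_regression_bruteforce(X, y, lam0, lam2):
        """Exact L0+L2-penalized least squares by support enumeration.
        Exponential in the number of columns; viable only for tiny p."""
        n, p = X.shape
        best_obj, best_beta = float(y @ y), np.zeros(p)  # empty support
        for k in range(1, p + 1):
            for support in combinations(range(p), k):
                Xs = X[:, list(support)]
                # Ridge subproblem restricted to the chosen support:
                # minimize ||y - Xs b||^2 + lam2 * ||b||^2
                b = np.linalg.solve(Xs.T @ Xs + lam2 * np.eye(k), Xs.T @ y)
                resid = y - Xs @ b
                obj = resid @ resid + lam0 * k + lam2 * (b @ b)
                if obj < best_obj:
                    best_obj = obj
                    best_beta = np.zeros(p)
                    best_beta[list(support)] = b
        return best_beta, best_obj

    # Tiny synthetic example (hypothetical data, for illustration only):
    rng = np.random.default_rng(0)
    X = rng.standard_normal((20, 8))
    y = X @ np.array([2.0, 0, 0, -1.5, 0, 0, 0, 0]) + 0.1 * rng.standard_normal(20)
    beta_hat, obj = l0l2_regression_bruteforce(X, y, lam0=0.5, lam2=0.01)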