The high-order block RIP for non-convex block-sparse compressed sensing

This paper concentrates on the recovery, from linear measurements, of block-sparse signals, which are not only sparse but whose nonzero elements are also arranged in blocks (clusters) rather than arbitrarily distributed over the vector. We establish high-order sufficient conditions based on the block RIP to ensure the exact recovery of every block $s$-sparse signal …
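
For reference, the block RIP underlying such conditions (standard definition; notation ours) requires that, for every block $s$-sparse $x$ over the given block structure $\mathcal{I}$, the measurement matrix $A$ satisfies

$$ (1-\delta_{s|\mathcal{I}})\,\|x\|_2^2 \;\le\; \|Ax\|_2^2 \;\le\; (1+\delta_{s|\mathcal{I}})\,\|x\|_2^2 . $$

High-order sufficient conditions of the kind referenced here typically take the form of upper bounds on a constant $\delta_{ts|\mathcal{I}}$ of order $ts > s$.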

Safe screening rules for L0-Regression

We give safe screening rules to eliminate variables from regression with L0 regularization or a cardinality constraint. These rules are based on guarantees that a feature must, or cannot, be selected in an optimal solution. The screening rules can be computed from a convex relaxation solution in linear time, without solving the L0 optimization problem. …
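
To make the flavor of such rules concrete, here is a deliberately crude sketch of screening from bounds for $\min_b \|y - Xb\|_2^2 + \lambda\|b\|_0$. The two tests below are our own illustration, far weaker than the paper's relaxation-based rules and not linear-time; they fix a feature in or out whenever a trivial lower bound beats a known upper bound:

    import numpy as np

    def rss(X, y):
        # Residual sum of squares of the least-squares fit of y on X.
        if X.shape[1] == 0:
            return float(y @ y)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return float(r @ r)

    def crude_screen(X, y, lam, ub):
        # Screening for min_b ||y - Xb||^2 + lam * ||b||_0, given any valid
        # upper bound `ub` on the optimal value (e.g. from a heuristic).
        p = X.shape[1]
        full = rss(X, y)
        # Any solution with b_j = 0 costs at least the fit without column j,
        # so if that alone exceeds ub, feature j must be selected.
        must = [j for j in range(p)
                if rss(np.delete(X, j, axis=1), y) > ub]
        # Any solution selecting at least one feature costs >= full + lam;
        # if even that exceeds ub, no feature is ever selected.
        never = list(range(p)) if full + lam > ub else []
        return must, never

    # Example: ub is the objective of the feasible solution using column 0 only.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((50, 10))
    y = 3.0 * X[:, 0] + 0.1 * rng.standard_normal(50)
    must, never = crude_screen(X, y, lam=0.5, ub=rss(X[:, [0]], y) + 0.5)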

Learning Optimal Classification Trees: Strong Max-Flow Formulations

We consider the problem of learning optimal binary classification trees. Literature on the topic has burgeoned in recent years, motivated both by the empirical suboptimality of heuristic approaches and the tremendous improvements in mixed-integer programming (MIP) technology. Yet, existing approaches from the literature do not leverage the power of MIP to its full extent. Indeed, …

Outlier detection in time series via mixed-integer conic quadratic optimization

We consider the problem of estimating the true values of a Wiener process given noisy observations corrupted by outliers. The problem considered is closely related to the Trimmed Least Squares estimation problem, a robust estimation procedure well-studied from a statistical standpoint but poorly understood from an optimization perspective. In this paper we show how to …
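
One natural way to cast such a problem (our notation; the paper's exact formulation may differ) is as a mixed-integer program in which binary variables flag the trimmed observations:

$$ \min_{x \in \mathbb{R}^n,\; z \in \{0,1\}^n} \;\; \sum_{t=1}^{n} (1 - z_t)\,(y_t - x_t)^2 \;+\; \lambda \sum_{t=1}^{n-1} (x_{t+1} - x_t)^2 \quad \text{s.t.} \quad \sum_{t=1}^{n} z_t \le k , $$

where $y$ are the observations, $x$ the latent process values, $z_t = 1$ marks observation $t$ as an outlier, $k$ caps the number of outliers, and $\lambda$ weights the Wiener-increment smoothness term.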

Bridging Bayesian and Minimax Mean Square Error Estimation via Wasserstein Distributionally Robust Optimization

We introduce a distributionally robust minimum mean square error estimation model with a Wasserstein ambiguity set to recover an unknown signal from a noisy observation. The proposed model can be viewed as a zero-sum game between a statistician choosing an estimator—that is, a measurable function of the observation—and a fictitious adversary choosing a prior—that is, …
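
In symbols (notation ours), the game is

$$ \inf_{\psi} \; \sup_{\mathbb{Q} \in \mathbb{B}_\varepsilon(\hat{\mathbb{Q}})} \; \mathbb{E}_{\mathbb{Q}}\big[ \|x - \psi(y)\|_2^2 \big] , $$

where $\psi$ ranges over measurable estimators, $\hat{\mathbb{Q}}$ is a nominal joint distribution of the signal $x$ and the observation $y$, and $\mathbb{B}_\varepsilon$ is the Wasserstein ball of radius $\varepsilon$ around it.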

Stochastic Discrete First-order Algorithm for Feature Subset Selection

This paper addresses the problem of selecting a significant subset of candidate features to use for multiple linear regression. Bertsimas et al. (2016) recently proposed the discrete first-order (DFO) algorithm to efficiently find near-optimal solutions to this problem. However, this algorithm is unable to escape from locally optimal solutions. To resolve this, we propose a …
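
For context, the DFO algorithm of Bertsimas et al. (2016) is projected gradient descent with a hard-thresholding projection onto the sparsity constraint. The sketch below follows that scheme; the `noise` perturbation is a generic random-escape device standing in for the paper's stochastic mechanism, whose details are not shown here:

    import numpy as np

    def hard_threshold(b, k):
        # Keep the k largest-magnitude entries of b, zero out the rest.
        out = np.zeros_like(b)
        idx = np.argsort(np.abs(b))[-k:]
        out[idx] = b[idx]
        return out

    def dfo(X, y, k, iters=500, noise=0.0, seed=0):
        # Iterative hard thresholding for min ||y - Xb||^2 s.t. ||b||_0 <= k.
        # noise > 0 perturbs the gradient step to help escape local optima
        # (an illustrative randomization, not the paper's exact scheme).
        rng = np.random.default_rng(seed)
        p = X.shape[1]
        L = np.linalg.eigvalsh(X.T @ X).max()  # Lipschitz constant of the gradient
        b = np.zeros(p)
        for _ in range(iters):
            grad = X.T @ (X @ b - y)
            step = b - grad / L
            if noise > 0:
                step += noise * rng.standard_normal(p)
            b = hard_threshold(step, k)
        return b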

On Polyhedral and Second-Order-Cone Decompositions of Semidefinite Optimization Problems

We study a cutting-plane method for semidefinite optimization problems (SDOs) and supply a proof of the method’s convergence under a boundedness assumption. By relating the method’s rate of convergence to an initial outer approximation’s diameter, we argue that the method performs well when initialized with a second-order-cone approximation instead of a linear approximation. We invoke …
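
A generic instance of such a scheme (our sketch, using cvxpy, on the toy SDO $\min \operatorname{tr}(CX)$ s.t. $\operatorname{diag}(X) = 1$, $X \succeq 0$): solve an outer approximation, then add the cut $v^\top X v \ge 0$ for the most negative eigenvector $v$. The box initialization below is a simplification; the paper argues for a second-order-cone start:

    import cvxpy as cp
    import numpy as np

    def cutting_plane_sdo(C, tol=1e-6, max_cuts=500):
        # Outer approximation: the PSD cone is replaced by linear cuts
        # v' X v >= 0. The box |X_ij| <= 1 is valid here (diag(X) = 1 and
        # X PSD imply it) and keeps the relaxation bounded.
        n = C.shape[0]
        X = cp.Variable((n, n), symmetric=True)
        cons = [cp.diag(X) == 1, X >= -1, X <= 1]
        for _ in range(max_cuts):
            cp.Problem(cp.Minimize(cp.trace(C @ X)), cons).solve()
            w, V = np.linalg.eigh(X.value)
            if w[0] >= -tol:              # numerically PSD: done
                return X.value
            v = V[:, 0]                   # eigenvector of most negative eigenvalue
            cons.append(cp.sum(cp.multiply(np.outer(v, v), X)) >= 0)
        return X.value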

Exact solution of the donor-limited nearest neighbor hot deck imputation problem

Data quality in population surveys suffers from missing responses. We use combinatorial optimization to create a complete and coherent data set. Our approach builds on the widely used nearest neighbor hot deck imputation method, which replaces the missing values with observed values from a close unit, the so-called donor. As a repeated use of donors …
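
With a cap of $c$ uses per donor, nearest neighbor hot deck becomes an assignment problem. A minimal sketch follows; the Euclidean distance and the donor-replication trick are illustrative choices, not necessarily the paper's method:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def donor_limited_hot_deck(recipients, donors, cap):
        # recipients: (nr, d) observed covariates of units with missing fields.
        # donors:     (nd, d) covariates of fully observed units.
        # Each donor may serve at most `cap` recipients; total distance is minimized.
        assert recipients.shape[0] <= donors.shape[0] * cap
        # Pairwise Euclidean distances between recipients and donors.
        d = np.linalg.norm(recipients[:, None, :] - donors[None, :, :], axis=2)
        # Replicate each donor column `cap` times so capacity becomes implicit.
        cost = np.repeat(d, cap, axis=1)
        rows, cols = linear_sum_assignment(cost)
        return cols // cap  # donor index assigned to each recipient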

Computing Estimators of Dantzig Selector type via Column and Constraint Generation

We consider a class of linear-programming based estimators for reconstructing a sparse signal from linear measurements. Specific formulations of the reconstruction problem considered here include the Dantzig selector, basis pursuit (for the case in which the measurements contain no errors), and the fused Dantzig selector (for the case in which the underlying signal is piecewise constant). …
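
For concreteness, the Dantzig selector solves

$$ \min_{\beta \in \mathbb{R}^p} \; \|\beta\|_1 \quad \text{s.t.} \quad \big\| X^\top (y - X\beta) \big\|_\infty \le \delta , $$

which becomes a linear program after the standard split $\beta = \beta^+ - \beta^-$. Since most variables and constraints are inactive at a sparse optimum, generating columns and rows on demand is a natural fit.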

A mixed-integer optimization approach to an exhaustive cross-validated model selection for regression

We consider a linear regression model for which we assume that many of the observed regressors are irrelevant for prediction. To avoid overfitting, we conduct variable selection and include only the true predictors in the least squares fit. Best subset selection has gained much interest in recent years for addressing this objective. For …
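
As a baseline for what the MIO formulation computes exactly at scale, here is the brute-force version of exhaustive cross-validated subset selection for small $p$ (the fold scheme and names are ours):

    import itertools
    import numpy as np

    def cv_error(X, y, folds=5):
        # Mean squared k-fold cross-validation error of least squares on X.
        n = len(y)
        idx = np.arange(n)
        err = 0.0
        for f in range(folds):
            test = idx[f::folds]
            train = np.setdiff1d(idx, test)
            beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
            err += np.sum((y[test] - X[test] @ beta) ** 2)
        return err / n

    def best_subset_cv(X, y, folds=5):
        # Exhaustively score every nonempty subset of columns and return the
        # one with the smallest cross-validated error. O(2^p): viable only
        # for small p, which is exactly why an MIO formulation is needed.
        p = X.shape[1]
        best, best_err = None, np.inf
        for r in range(1, p + 1):
            for S in itertools.combinations(range(p), r):
                e = cv_error(X[:, list(S)], y, folds)
                if e < best_err:
                    best, best_err = S, e
        return best, best_err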