A Unified Approach to Mixed-Integer Optimization Problems With Logical Constraints

We propose a unified framework to address a family of classical mixed-integer optimization problems with logically constrained decision variables, including network design, facility location, unit commitment, sparse portfolio selection, binary quadratic optimization, sparse principal component analysis and sparse learning problems. These problems exhibit logical relationships between continuous and discrete variables, which are usually reformulated linearly … Read more
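
To make the shared structure concrete, here is a minimal sketch of the logical constraint these problems have in common, in illustrative notation rather than the paper's: a continuous variable $x_i$ may be nonzero only when an associated binary indicator $z_i$ is switched on, which is commonly linearized with a big-$M$ bound,

$$ x_i (1 - z_i) = 0 \quad \Longleftrightarrow \quad -M z_i \le x_i \le M z_i, \qquad z_i \in \{0, 1\}, $$

for a sufficiently large constant $M$ bounding the magnitude of $x_i$.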

A Data-Driven Approach to Multi-Stage Stochastic Linear Optimization

We propose a new data-driven approach for addressing multi-stage stochastic linear optimization problems with unknown distributions. The approach consists of solving a robust optimization problem that is constructed from sample paths of the underlying stochastic process. We provide asymptotic bounds on the gap between the optimal costs of the robust optimization problem and the underlying … Read more
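
A hedged sketch of the kind of sample-path-based robust counterpart described above (the notation is illustrative, not the paper's): given sample paths $\hat{\xi}^1, \ldots, \hat{\xi}^N$ of the stochastic process and uncertainty sets $\mathcal{U}_j$ constructed around each path, one solves

$$ \min_{\pi \in \Pi} \; \frac{1}{N} \sum_{j=1}^{N} \sup_{\xi \in \mathcal{U}_j} \mathrm{cost}(\pi, \xi), $$

where $\Pi$ is a class of non-anticipative policies; the asymptotic bounds mentioned above compare the optimal cost of such a robust problem to that of the underlying stochastic problem.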

A Scalable Algorithm for Sparse Portfolio Selection

The sparse portfolio selection problem is one of the most famous and frequently studied problems in the optimization and financial economics literatures. In a universe of risky assets, the goal is to construct a portfolio with maximal expected return and minimum variance, subject to an upper bound on the number of positions, linear inequalities and minimum … Read more
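
In illustrative notation (not necessarily the paper's), the problem takes the form of a cardinality-constrained mean-variance program,

$$ \min_{x \in \mathbb{R}^n} \; x^\top \Sigma x - \lambda\, \mu^\top x \quad \text{s.t.} \quad e^\top x = 1, \quad \|x\|_0 \le k, \quad A x \le b, $$

where $\Sigma$ is the covariance matrix, $\mu$ the vector of expected returns, $\lambda \ge 0$ a risk-return trade-off, $\|x\|_0$ the number of positions held, and $A x \le b$ collects the linear inequalities; minimum-investment constraints additionally require $x_i \ge \ell_i$ whenever $x_i \neq 0$.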

Learning a Mixture of Gaussians via Mixed Integer Optimization

We consider the problem of estimating the parameters of a multivariate Gaussian mixture model (GMM) given access to $n$ samples $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n \in \mathbb{R}^d$ that are believed to have come from a mixture of multiple subpopulations. State-of-the-art algorithms used to recover these parameters use heuristics to either maximize the log-likelihood of the sample or try to … Read more
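
For reference, the standard GMM density whose parameters are being estimated (standard notation, not specific to the paper's formulation): with $K$ components, mixing weights $\pi_k$, means $\mu_k$, and covariance matrices $\Sigma_k$,

$$ p(\mathbf{x}) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(\mathbf{x}; \mu_k, \Sigma_k), \qquad \pi_k \ge 0, \quad \sum_{k=1}^{K} \pi_k = 1, $$

and a maximum-likelihood estimate maximizes $\sum_{i=1}^{n} \log p(\mathbf{x}_i)$ over all parameters, which is the nonconvex problem that the heuristics mentioned above attack.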

Bootstrap Robust Prescriptive Analytics

We address the problem of prescribing an optimal decision in a framework where its cost depends on uncertain problem parameters $Y$ that need to be learned from data. Earlier work by Bertsimas and Kallus (2014) transforms classical machine learning methods that merely predict $Y$ from supervised training data $[(x_1, y_1), \dots, (x_n, y_n)]$ into prescriptive … Read more
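
A hedged sketch of the weighted prescription at the heart of that earlier framework (notation illustrative): given a new covariate observation $x$, the prescribed decision solves

$$ \hat{z}(x) \in \arg\min_{z \in \mathcal{Z}} \; \sum_{i=1}^{n} w_i(x)\, c(z; y_i), $$

where the weights $w_i(x)$ are produced by a machine learning method (e.g., nearest neighbors or tree ensembles) trained on $[(x_1, y_1), \dots, (x_n, y_n)]$ and $c(z; y)$ is the decision cost under outcome $y$.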

Computation of exact bootstrap confidence intervals: complexity and deterministic algorithms

The bootstrap is a nonparametric approach for calculating quantities, such as confidence intervals, directly from data. Since calculating exact bootstrap quantities is believed to be intractable, randomized resampling algorithms are traditionally used. Motivated by the fact that the variability from randomization can lead to inaccurate outputs, we propose a deterministic approach. First, we establish several … Read more
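
As a toy illustration of what "exact bootstrap quantities" means (this is brute-force enumeration, not the paper's deterministic algorithms), the sketch below lists every distinct resample of a tiny sample and computes the exact bootstrap distribution of the mean; the number of distinct resamples is $\binom{2n-1}{n}$, which is exactly why naive enumeration is impractical beyond very small $n$:

```python
from itertools import combinations_with_replacement
from math import factorial, prod
from collections import Counter

def exact_bootstrap_mean(sample):
    """Return the exact bootstrap distribution of the sample mean as a
    dict {mean: probability}, by enumerating every distinct resample
    (multiset) drawn with replacement from `sample`."""
    n = len(sample)
    dist = Counter()
    # Each resample with replacement corresponds to a multiset of indices;
    # there are C(2n - 1, n) of them, so this only works for tiny n.
    for idx in combinations_with_replacement(range(n), n):
        counts = Counter(idx)
        # Multinomial probability of this multiset: n! / (prod c_i!) * (1/n)^n.
        coeff = factorial(n) // prod(factorial(c) for c in counts.values())
        prob = coeff / n ** n
        mean = sum(sample[i] for i in idx) / n
        dist[round(mean, 12)] += prob
    return dict(dist)

# Tiny example: exact bootstrap distribution of the mean of 4 observations.
# Exact quantiles of this distribution give exact percentile bootstrap
# confidence intervals, with no randomization variability.
print(exact_bootstrap_mean([1.0, 2.0, 4.0, 7.0]))
```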

The Trimmed Lasso: Sparsity and Robustness

Nonconvex penalty methods for sparse modeling in linear regression have been a topic of fervent interest in recent years. Herein, we study a family of nonconvex penalty functions that we call the trimmed Lasso and that offers exact control over the desired level of sparsity of estimators. We analyze its structural properties and in doing … Read more
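
Concretely, for a coefficient vector $\beta \in \mathbb{R}^p$ and a target sparsity level $k$, the trimmed Lasso penalty sums the $p - k$ smallest absolute coefficients,

$$ T_k(\beta) = \sum_{i=k+1}^{p} |\beta_{(i)}|, \qquad |\beta_{(1)}| \ge |\beta_{(2)}| \ge \cdots \ge |\beta_{(p)}|, $$

so $T_k(\beta) = 0$ if and only if $\beta$ has at most $k$ nonzero entries; this is the sense in which the penalty offers exact control over the level of sparsity.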

Scalable Robust and Adaptive Inventory Routing

We consider the finite horizon inventory routing problem with uncertain demand, where a supplier must deliver a particular commodity to its customers periodically such that, even under uncertain demand, the customers do not stock out (e.g., supplying heating oil to residential customers). Current techniques that solve this problem with stochastic demand, robust or adaptive optimization … Read more

Adaptive Distributionally Robust Optimization

We develop a modular and tractable framework for solving an adaptive distributionally robust linear optimization problem, where we minimize the worst-case expected cost over an ambiguity set of probability distributions. The adaptive distributionally robust optimization framework caters for dynamic decision making, where decisions can adapt to the uncertain outcomes as they unfold in … Read more
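
In generic form (illustrative, not necessarily the paper's exact model), an adaptive distributionally robust linear optimization problem reads

$$ \min_{x \in X,\; y(\cdot)} \; \sup_{\mathbb{P} \in \mathcal{F}} \; \mathbb{E}_{\mathbb{P}} \big[ c^\top x + d^\top y(\tilde{z}) \big] \quad \text{s.t.} \quad A x + B\, y(\tilde{z}) \ge b(\tilde{z}) \;\; \mathbb{P}\text{-a.s. for all } \mathbb{P} \in \mathcal{F}, $$

where $\mathcal{F}$ is the ambiguity set of distributions of the uncertain parameters $\tilde{z}$, $x$ is the here-and-now decision, and $y(\cdot)$ is a decision rule that adapts to the realized outcomes.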

Duality in Two-stage Adaptive Linear Optimization: Faster Computation and Stronger Bounds

In this paper we derive and exploit duality in general two-stage adaptive linear optimization models. The equivalent dualized formulation we derive is again a two-stage adaptive linear optimization model. Therefore, all existing solution approaches for two-stage adaptive models can be used to solve or approximate the dual formulation. The new dualized model differs from the … Read more
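
For orientation, a generic two-stage adaptive linear optimization model (illustrative notation) is

$$ \min_{x \in X} \; c^\top x + \max_{\xi \in \Xi} \; \min_{y \,:\, A x + B y \ge h(\xi)} \; d^\top y, $$

with here-and-now decision $x$, uncertainty set $\Xi$, and wait-and-see recourse $y$; the dualized formulation derived in the paper is again of this two-stage adaptive form, so any algorithm for the primal form can also be applied to it.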