Finding Regions of Counterfactual Explanations via Robust Optimization

Counterfactual explanations play an important role in detecting bias and improving the explainability of data-driven classification models. A counterfactual explanation (CE) is a minimally perturbed data point for which the decision of the model changes. Most existing methods can provide only one CE, which may not be achievable for the user. In this …
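
In symbols (generic notation assumed here, not taken from the paper), for a classifier \(f\) and a factual input \(x\), a single CE solves
\[
  \min_{x'} \; \|x' - x\| \quad \text{s.t.} \quad f(x') \neq f(x),
\]
whereas a region of CEs is a whole set of perturbed points that all flip the decision, leaving the user free to choose which change to implement.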

The Online Shortest Path Problem: Learning Travel Times Using A Multi-Armed Bandit Framework

In the age of e-commerce, many logistics companies must operate in large road networks without accurate knowledge of travel times for their specific fleet of vehicles. Moreover, millions of dollars are spent on routing services that do not accurately capture the specific characteristics of the companies’ drivers and the types of vehicles they must use. …
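
A minimal sketch of the bandit idea, not the paper's algorithm: route on optimistic travel-time estimates, observe realized times on the chosen path, and update per-edge statistics. The graph, the Gaussian noise model, and the confidence-bound rule below are illustrative assumptions.

```python
# Sketch only: UCB-style learning of edge travel times on a road network.
import heapq, math, random

def shortest_path(graph, costs, src, dst):
    """Dijkstra; returns the path as a list of edges (u, v). Assumes dst is reachable."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in graph.get(u, []):
            nd = d + costs[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:
        path.append((prev[node], node))
        node = path[-1][0]
    return list(reversed(path))

def ucb_routing(graph, true_mean, src, dst, rounds=500):
    mean = {e: 1.0 for e in true_mean}   # optimistic initial estimates
    count = {e: 1 for e in true_mean}
    for t in range(1, rounds + 1):
        # lower confidence bounds: optimism in the face of uncertainty
        lcb = {e: max(0.0, mean[e] - math.sqrt(2 * math.log(t + 1) / count[e]))
               for e in true_mean}
        for e in shortest_path(graph, lcb, src, dst):
            obs = random.gauss(true_mean[e], 0.3)      # noisy observed travel time
            count[e] += 1
            mean[e] += (obs - mean[e]) / count[e]      # running average update
    return mean

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
true = {("a", "b"): 2.0, ("a", "c"): 1.0, ("b", "d"): 1.0, ("c", "d"): 3.0}
print(ucb_routing(graph, true, "a", "d"))
```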

Data-driven Prediction of Relevant Scenarios for Robust Combinatorial Optimization

We study iterative methods for (two-stage) robust combinatorial optimization problems with discrete uncertainty. We propose a machine-learning-based heuristic to determine starting scenarios that provide strong lower bounds. To this end, we design dimension-independent features and train a Random Forest Classifier on small-dimensional instances. Experiments show that our method improves the solution process for larger instances …
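
A rough sketch of how such a classifier might be set up with scikit-learn; the features, labels, and data below are placeholders standing in for the paper's dimension-independent features and training instances.

```python
# Sketch: rank candidate starting scenarios with a Random Forest trained on
# dimension-independent scenario features. All data below are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def scenario_features(cost_vector, all_costs):
    # summary statistics that do not depend on the number of variables
    return [
        cost_vector.mean() / all_costs.mean(),
        cost_vector.std() / (all_costs.std() + 1e-9),
        cost_vector.max() / all_costs.max(),
        np.median(cost_vector) / (np.median(all_costs) + 1e-9),
    ]

rng = np.random.default_rng(0)
train_costs = rng.random((200, 30))      # scenarios from small training instances
X = np.array([scenario_features(c, train_costs.ravel()) for c in train_costs])
y = (X[:, 0] > 1.0).astype(int)          # placeholder label: "gave a strong lower bound"
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_costs = rng.random((50, 80))         # scenarios of a larger instance (different dimension)
cand = np.array([scenario_features(c, new_costs.ravel()) for c in new_costs])
scores = clf.predict_proba(cand)[:, 1]
start_scenarios = np.argsort(-scores)[:5]   # most promising starting scenarios
```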

Strong Partitioning and a Machine Learning Approximation for Accelerating the Global Optimization of Nonconvex QCQPs

We learn optimal instance-specific heuristics for the global minimization of nonconvex quadratically-constrained quadratic programs (QCQPs). Specifically, we consider partitioning-based convex mixed-integer programming relaxations for nonconvex QCQPs and propose the novel problem of strong partitioning to optimally partition variable domains without sacrificing global optimality. Since solving this max-min strong partitioning problem exactly can be very challenging, …
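
A rough sketch of the max-min structure (generic notation assumed here): if \(\mathrm{LB}(p)\) denotes the lower bound produced by the partitioning-based convex relaxation induced by partition points \(p\) for the variable domains, strong partitioning seeks
\[
  p^\star \in \arg\max_{p \in \mathcal{P}} \; \mathrm{LB}(p),
  \qquad
  \mathrm{LB}(p) = \min_{x \in R(p)} f_0(x),
\]
so an outer maximization over partitions wraps an inner relaxation minimization, which is what gives the problem its challenging max-min form.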

A mixed-integer exponential cone programming formulation for feature subset selection in logistic regression

Logistic regression is a widely used classification tool for constructing prediction models. For datasets with a large number of features, feature subset selection methods are used to obtain accurate and interpretable prediction models in which irrelevant and redundant features are removed. In this paper, we address the problem of feature subset selection in logistic …
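
The conic connection can be sketched via the standard softplus reformulation (notation assumed here, not necessarily the paper's exact model). With the exponential cone \(K_{\exp} = \{(x_1, x_2, x_3) : x_1 \ge x_2\, e^{x_3/x_2},\; x_2 > 0\}\), the epigraph of the logistic loss, \(t \ge \log(1 + e^{u})\), is representable as
\[
  \exists\, z_1, z_2 \ge 0:\quad z_1 + z_2 \le 1,\quad
  (z_1,\, 1,\, u - t) \in K_{\exp},\quad
  (z_2,\, 1,\, -t) \in K_{\exp},
\]
since the two cone constraints force \(z_1 \ge e^{u - t}\) and \(z_2 \ge e^{-t}\). Feature subset selection can then be imposed with binary indicators, e.g. \(|w_j| \le M s_j\), \(\sum_j s_j \le k\), \(s_j \in \{0, 1\}\), which yields a mixed-integer exponential cone program.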

Deep learning and hyperparameter optimization for assessing one’s eligibility for a subcutaneous implantable cardioverter-defibrillator

In cardiology, it is standard for high-risk patients suffering from ventricular arrhythmias (the leading cause of sudden cardiac death) to be treated with Subcutaneous Implantable Cardioverter-Defibrillators (S-ICDs). S-ICDs carry a risk of so-called T-wave oversensing (TWOS), which can lead to inappropriate shocks with an inherent health risk. For this …

Machine Learning for K-adaptability in Two-stage Robust Optimization

Two-stage robust optimization problems constitute one of the hardest classes of optimization problems. One of the solution approaches to this class of problems is K-adaptability. This approach simultaneously seeks the best partitioning of the uncertainty set of scenarios into K subsets and optimizes decisions corresponding to each of these subsets. In the general case, it is solved using the …
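
For reference, the K-adaptability counterpart fixes K candidate second-stage decisions up front and lets the worst-case scenario be answered by the best of them; in its standard form for objective uncertainty (generic notation, not necessarily the paper's exact model) it reads
\[
  \min_{x \in X,\; y_1, \dots, y_K \in Y(x)} \;
  \max_{\xi \in \Xi} \;
  \min_{k \in \{1, \dots, K\}} \;
  c^\top x + q(\xi)^\top y_k .
\]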

Robust Two-Stage Optimization with Covariate Data

We consider a generalization of two-stage decision problems in which the second-stage decision may be a function of a predictive signal but cannot adapt fully to the realized uncertainty. We show how such problems can be learned from sample data by considering a family of regularized sample average formulations. Furthermore, our regularized data-driven formulations …
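
One way to picture the regularized sample average idea (generic notation assumed, not quoted from the paper): given data \((s_i, \xi_i)_{i=1}^{N}\) of signals and realized uncertainties, restrict the second-stage decision to a policy \(g\) from a class \(\mathcal{G}\) of functions of the signal and solve
\[
  \min_{x \in X,\; g \in \mathcal{G}} \;
  c^\top x + \frac{1}{N} \sum_{i=1}^{N} Q\bigl(x,\, g(s_i),\, \xi_i\bigr)
  + \lambda\, \Omega(g),
\]
where \(Q\) is the second-stage cost and \(\Omega\) is a regularizer that controls how aggressively the policy adapts to the signal.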

A Machine Learning Approach to Solving Large Bilevel and Stochastic Programs: Application to Cycling Network Design

We present a novel machine-learning-based approach to solving bilevel programs that involve a large number of independent followers, which, as a special case, include two-stage stochastic programming. We propose an optimization model that explicitly considers a sampled subset of followers and exploits a machine learning model to estimate the objective values of unsampled followers. …
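
A minimal sketch of the sampling-plus-prediction idea; the follower model, features, and choice of regressor below are illustrative placeholders, not the paper's cycling network design setup.

```python
# Sketch: evaluate a leader decision by solving a sampled subset of follower
# problems exactly and predicting the remaining follower objectives with a
# learned regressor. All problem data below are placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_followers, n_sampled = 5000, 200
features = rng.random((n_followers, 6))          # follower descriptors

def follower_objective(leader_x, f):
    # stand-in for an exact follower solve given the leader decision
    return float(f @ leader_x + 0.1 * f.sum())

def estimate_leader_objective(leader_x):
    idx = rng.choice(n_followers, n_sampled, replace=False)
    exact = np.array([follower_objective(leader_x, features[i]) for i in idx])
    model = GradientBoostingRegressor().fit(features[idx], exact)   # learn from the sample
    rest = np.setdiff1d(np.arange(n_followers), idx)
    predicted = model.predict(features[rest])                       # estimate the rest
    return exact.sum() + predicted.sum()

print(estimate_leader_objective(rng.random(6)))
```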