Computing Counterfactual Explanations for Linear Optimization: A New Class of Bilevel Models and a Tailored Penalty Alternating Direction Method

Explainable artificial intelligence is one of the most important trends in modern machine-learning research. The idea is to explain the outcome of a model by presenting a change to the model's input under which the outcome changes significantly. In this paper, we study this question for linear optimization problems as an automated … Read more
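
The sketch below illustrates the flavor of such a counterfactual question for a linear program: by how much must the cost vector change before the optimal solution moves? It is a brute-force illustration on assumed toy data, not the bilevel model or the penalty alternating direction method the paper develops.

```python
# A minimal sketch (not the paper's method): given an LP
# min c^T x  s.t.  A x <= b, x >= 0, search over scaled perturbations
# of the cost vector c until the optimal solution changes notably.
# All problem data below are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([6.0, 9.0])
c = np.array([-1.0, -1.0])  # maximize x1 + x2 via minimization

base = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Try increasingly large perturbations of c along a fixed direction d
# and report the smallest one that moves the optimal solution.
d = np.array([1.0, 0.0])
for t in np.linspace(0.0, 2.0, 21):
    res = linprog(c + t * d, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
    if np.linalg.norm(res.x - base.x) > 1e-6:
        print(f"perturbation t={t:.2f} changes the optimum: "
              f"{base.x} -> {res.x}")
        break
```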

Stochastic first-order methods with multi-extrapolated momentum for highly smooth unconstrained optimization

In this paper, we consider an unconstrained stochastic optimization problem whose objective function exhibits a high order of smoothness. In particular, we propose a stochastic first-order method (SFOM) with multi-extrapolated momentum, in which multiple extrapolations are performed in each iteration, followed by a momentum step based on these extrapolations. We show that our proposed … Read more
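
As a rough illustration of the idea, the following sketch performs several gradient evaluations at points extrapolated along the momentum direction and averages them into the momentum update. This is one plausible reading of a multi-extrapolated momentum step, not the paper's exact scheme; the test objective, stepsizes, and extrapolation rule are assumptions.

```python
# Hedged sketch of a stochastic first-order step with multiple
# extrapolations: evaluate stochastic gradients at several extrapolated
# points each iteration and combine them into a momentum update.
import numpy as np

rng = np.random.default_rng(0)

def stoch_grad(x):
    # Noisy gradient of f(x) = 0.5 * ||x||^2 (illustrative objective).
    return x + 0.1 * rng.standard_normal(x.shape)

def sfom_multi_extrapolated(x0, n_iters=200, n_extra=3, eta=0.1, beta=0.9):
    x, m = x0.copy(), np.zeros_like(x0)
    for _ in range(n_iters):
        # Multiple extrapolations: probe points along the momentum direction.
        grads = [stoch_grad(x + (k / n_extra) * m)
                 for k in range(1, n_extra + 1)]
        g = np.mean(grads, axis=0)
        m = beta * m - eta * g   # momentum step built from the extrapolations
        x = x + m
    return x

x = sfom_multi_extrapolated(np.ones(5))
print("final iterate norm:", np.linalg.norm(x))
```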

A Generalized Voting Game for Categorical Network Choices

This paper presents a game-theoretical framework for data classification and network discovery, focusing on pairwise influences in multivariate choices. The framework consists of two complementary games in which individuals, connected through a signed weighted graph, exhibit network similarity. A voting rule captures the influence of an individual’s neighbors, categorized as attractive (friend-like) or repulsive (enemy-like), … Read more
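
A minimal sketch of a voting rule of this kind, assuming a tiny signed weighted graph and binary choices: each individual adopts the label that agrees with attractive (positive-weight) neighbors and disagrees with repulsive (negative-weight) ones, via asynchronous best responses. The graph, labels, and update order are assumptions, not the paper's general framework.

```python
# Signed-graph voting sketch: labels in {-1, +1}; a positive score from
# W @ labels pushes an individual toward +1, a negative score toward -1.
import numpy as np

W = np.array([[0,  1, -1],
              [1,  0,  1],
              [-1, 1,  0]], dtype=float)  # signed weighted adjacency
labels = np.array([1, -1, 1])             # initial categorical choices

for _ in range(10):  # asynchronous best-response voting updates
    changed = False
    for i in range(len(labels)):
        s = W[i] @ labels                 # net influence of i's neighbors
        new_i = 1 if s >= 0 else -1
        if new_i != labels[i]:
            labels[i] = new_i
            changed = True
    if not changed:                       # stable profile reached
        break

print("stable choices:", labels)
```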

Enhancing Top Efficiency by Minimizing Second-Best Scores: A Novel Perspective on Super Efficiency Models in DEA

In this paper, we reveal a new characterization of the super-efficiency model for Data Envelopment Analysis (DEA). In DEA, the efficiency of each decision-making unit (DMU) is measured as the ratio of the weighted sum of its outputs to the weighted sum of its inputs. In order to measure the efficiency of a DMU, ${\rm DMU}_j$, say, in CCR … Read more
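
The efficiency ratio above is computed in practice via the CCR multiplier LP, and the super-efficiency variant simply drops the evaluated DMU's own ratio constraint; the sketch below shows both with scipy on assumed toy data (one input and one output per DMU).

```python
# CCR multiplier LP: max u^T y_j  s.t.  v^T x_j = 1,
# u^T y_k - v^T x_k <= 0 for all DMUs k, with u, v >= 0.
# Super-efficiency drops the constraint for k = j.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0], [4.0], [3.0]])   # inputs, one row per DMU
Y = np.array([[1.0], [2.0], [3.0]])   # outputs, one row per DMU

def ccr_efficiency(j, super_eff=False):
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    c = np.concatenate([np.zeros(m), -Y[j]])           # minimize -u^T y_j
    A_eq = np.concatenate([X[j], np.zeros(s)])[None, :]  # v^T x_j = 1
    rows = [k for k in range(n) if not (super_eff and k == j)]
    A_ub = np.hstack([-X[rows], Y[rows]])              # u^T y_k <= v^T x_k
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(rows)),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (m + s))
    return -res.fun

for j in range(3):
    print(f"DMU_{j}: efficiency={ccr_efficiency(j):.3f}, "
          f"super-efficiency={ccr_efficiency(j, super_eff=True):.3f}")
```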

Guaranteed bounds for optimal stopping problems using kernel-based non-asymptotic uniform confidence bands

In this paper, we introduce an approach for obtaining probabilistically guaranteed upper and lower bounds on the true optimal value of stopping problems. Bounds of existing simulation-and-regression approaches, such as those based on least squares Monte Carlo and information relaxation, are stochastic in nature and therefore do not come with a finite sample guarantee. Our … Read more
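
For context, the sketch below implements the least squares Monte Carlo baseline the paper contrasts with: a regression estimate of continuation values whose resulting value estimate is stochastic and carries no finite-sample guarantee. The paper's kernel-based uniform confidence bands, which turn such estimates into guaranteed bounds, are not reproduced here; the Bermudan-put model and its parameters are assumptions.

```python
# Least squares Monte Carlo for a Bermudan put: regress discounted
# continuation values on polynomial features and exercise when the
# immediate payoff beats the regression estimate.
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, S0, K, r, sigma, dt = 20000, 10, 1.0, 1.0, 0.05, 0.2, 0.1

# Simulate geometric Brownian motion paths.
Z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * Z, axis=1))

payoff = lambda s: np.maximum(K - s, 0.0)          # Bermudan put
V = payoff(S[:, -1])                               # value at maturity
for t in range(n_steps - 2, -1, -1):
    itm = payoff(S[:, t]) > 0                      # regress in-the-money paths
    Xf = np.vander(S[itm, t], 3)
    coef, *_ = np.linalg.lstsq(Xf, np.exp(-r * dt) * V[itm], rcond=None)
    cont = Xf @ coef                               # estimated continuation value
    exercise = payoff(S[itm, t]) > cont
    V = np.exp(-r * dt) * V
    V[np.flatnonzero(itm)[exercise]] = payoff(S[itm, t])[exercise]

print("LSMC value estimate (stochastic, no finite-sample guarantee):", V.mean())
```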

Decision-focused predictions via pessimistic bilevel optimization: complexity and algorithms

Dealing with uncertainty in optimization parameters is an important and longstanding challenge. Typically, uncertain parameters are first predicted as accurately as possible, and then a deterministic optimization problem is solved. However, the decisions produced by this so-called predict-then-optimize procedure can be highly sensitive to the uncertain parameters. In this work, we contribute to recent efforts in producing decision-focused predictions, i.e., to … Read more
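
The predict-then-optimize pipeline being critiqued can be summarized in two stages, as in the sketch below: a point predictor for the uncertain cost coefficients followed by a deterministic LP. The linear model, features, and toy LP are assumptions; the closing comment hints at the sensitivity the paper addresses.

```python
# Predict-then-optimize sketch: Stage 1 fits a point predictor for the
# costs; Stage 2 plugs the prediction into a deterministic LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

# Stage 1 (predict): least-squares fit of costs c from features z.
Z = rng.standard_normal((100, 3))                  # historical features
true_W = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, -0.5]])
C = Z @ true_W + 0.05 * rng.standard_normal((100, 2))
W_hat, *_ = np.linalg.lstsq(Z, C, rcond=None)

# Stage 2 (optimize): solve min c_hat^T x  s.t.  x1 + x2 = 1, x >= 0.
z_new = np.array([0.2, -0.4, 1.0])
c_hat = z_new @ W_hat
res = linprog(c_hat, A_eq=[[1.0, 1.0]], b_eq=[1.0],
              bounds=[(0, None)] * 2)
print("predicted costs:", c_hat, "-> decision:", res.x)
# A tiny prediction error can flip the decision: the argmin over the
# simplex jumps between the two vertices when c_hat[0] and c_hat[1] cross.
```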

Forecasting Outside the Box: Application-Driven Optimal Pointwise Forecasts for Stochastic Optimization

The exponential growth in data availability in recent years has led to new formulations of data-driven optimization problems. One such formulation is that of stochastic optimization problems with contextual information, where the goal is to optimize the expected value of a certain function given some contextual information (also called features) that accompanies the main data … Read more
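
One common baseline for such contextual problems, sketched below under assumed newsvendor-style data, reweights historical samples whose features are close to the query and optimizes the resulting sample average; the paper's application-driven pointwise forecasts differ, so this is context rather than the proposed method.

```python
# Contextual stochastic optimization via kNN reweighting: approximate
# E[f(x, Y) | features = z] by the sample average over the k historical
# observations whose features are nearest to z.
import numpy as np

rng = np.random.default_rng(3)
feats = rng.uniform(0, 1, size=(500, 1))            # historical features
demand = 10 * feats[:, 0] + rng.normal(0, 1, 500)   # historical outcomes

def contextual_decision(z, k=25, price=2.0, cost=1.0):
    # Pick the k nearest historical samples in feature space ...
    idx = np.argsort(np.abs(feats[:, 0] - z))[:k]
    scenarios = demand[idx]
    # ... and choose the order quantity maximizing the sample-average
    # newsvendor profit over those scenarios.
    grid = np.linspace(scenarios.min(), scenarios.max(), 200)
    profit = lambda q: np.mean(price * np.minimum(q, scenarios) - cost * q)
    return max(grid, key=profit)

print("order quantity at z=0.8:", round(contextual_decision(0.8), 2))
```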

Missing Value Imputation via Mathematical Optimization with Instance-and-Feature Neighborhoods

Datasets collected for analysis often contain a number of incomplete instances in which some feature values are missing. Since many statistical analyses and machine learning algorithms require complete datasets, missing values need to be imputed in advance. Bertsimas et al. (2018) proposed a high-performance method that combines machine learning and mathematical optimization algorithms for … Read more
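
In the same spirit, the sketch below alternates between distance computation and re-imputation over instance neighborhoods until the fill-ins stabilize. It omits the feature neighborhoods and the exact optimization model of the paper; the data, neighborhood size, and stopping rule are assumptions.

```python
# kNN-style iterative imputation: initialize missing entries with column
# means, then repeatedly re-impute each one from its nearest instances
# (rows) until the imputed values stop changing.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 4))
mask = rng.random(X.shape) < 0.1        # True where a value is missing
X_obs = np.where(mask, np.nan, X)

Xi = np.where(mask, np.nanmean(X_obs, axis=0), X_obs)  # mean initialization
for _ in range(20):
    prev = Xi.copy()
    for i, j in zip(*np.nonzero(mask)):
        d = np.linalg.norm(Xi - Xi[i], axis=1)          # instance distances
        nn = np.argsort(d)[1:6]                         # 5 nearest rows (skip self)
        Xi[i, j] = Xi[nn, j].mean()
    if np.linalg.norm(Xi - prev) < 1e-6:                # fill-ins stabilized
        break

print("imputation RMSE:", np.sqrt(np.mean((Xi[mask] - X[mask]) ** 2)))
```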

A Generalization Result for Convergence in Learning-to-Optimize

Convergence in learning-to-optimize is rarely studied, because conventional convergence guarantees in optimization rest on geometric arguments that do not transfer easily to learned algorithms. We therefore develop a probabilistic framework that resembles deterministic optimization and allows geometric arguments to be carried over into learning-to-optimize. Our main theorem is a generalization result for parametric classes of potentially … Read more

Optimism in the Face of Ambiguity Principle for Multi-Armed Bandits

Follow-The-Regularized-Leader (FTRL) algorithms often enjoy optimal regret for adversarial as well as stochastic bandit problems and allow for a streamlined analysis. However, FTRL algorithms require the solution of an optimization problem in every iteration and are thus computationally challenging. In contrast, Follow-The-Perturbed-Leader (FTPL) algorithms achieve computational efficiency by perturbing the estimates of the rewards of … Read more
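
The computational contrast is easy to see in code: an FTPL step is just an argmax over perturbed cumulative reward estimates, with no per-round optimization problem. The sketch below is a minimal such loop under assumed Bernoulli arms; the unbiased loss estimators used for bandit feedback in the literature (e.g., geometric resampling) are omitted, and the pulled arm's estimate is updated with the observed reward directly.

```python
# Minimal FTPL loop for a stochastic multi-armed bandit: perturb the
# cumulative reward estimates and pull the argmax arm each round.
import numpy as np

rng = np.random.default_rng(5)
means = np.array([0.3, 0.5, 0.7])     # Bernoulli arm means (unknown to learner)
n_arms, T = len(means), 5000
est = np.zeros(n_arms)                # cumulative reward estimates
pulls = np.zeros(n_arms)

for t in range(1, T + 1):
    noise = rng.exponential(scale=np.sqrt(t), size=n_arms)
    arm = int(np.argmax(est + noise))           # perturbed leader: just an argmax
    reward = float(rng.random() < means[arm])   # bandit feedback
    est[arm] += reward
    pulls[arm] += 1

print("pull counts:", pulls.astype(int))  # the best arm should dominate
```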