Decomposition and Adaptive Sampling for Data-Driven Inverse Linear Optimization

This work addresses inverse linear optimization where the goal is to infer the unknown cost vector of a linear program. Specifically, we consider the data-driven setting in which the available data are noisy observations of optimal solutions that correspond to different instances of the linear program. We introduce a new formulation of the problem that, …
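
A minimal sketch of what the data-driven inverse-LP setup measures: given noisy observations of optimal solutions to several LP instances, a candidate cost vector is scored by the total suboptimality it assigns to those observations. The 2-D toy instances, the grid search over unit-norm costs, and the name suboptimality_loss are illustrative assumptions, not the decomposition/adaptive-sampling formulation of the paper.

```python
# Score candidate cost vectors by the suboptimality of the observed decisions.
# Toy instances and grid search are for illustration only.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
c_true = np.array([1.0, 2.0])                 # unknown cost the learner must infer

# Each "instance" is a feasible region {A x <= b, x >= 0} with a noisy observed optimum.
instances, observations = [], []
for _ in range(20):
    A = rng.uniform(0.5, 2.0, size=(3, 2))
    b = rng.uniform(2.0, 4.0, size=3)
    x_opt = linprog(-c_true, A_ub=A, b_ub=b, bounds=[(0, None)] * 2).x   # argmax c'x
    instances.append((A, b))
    observations.append(x_opt + rng.normal(scale=0.05, size=2))          # noisy observation

def suboptimality_loss(c):
    """Sum over instances of  max_x {c'x : A x <= b, x >= 0}  minus  c'x_obs."""
    loss = 0.0
    for (A, b), x_obs in zip(instances, observations):
        res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
        loss += -res.fun - c @ x_obs
    return loss

# Crude grid search over unit-norm cost vectors, just to show what the loss picks out.
angles = np.linspace(0.0, np.pi / 2, 200)
best = min((np.array([np.cos(a), np.sin(a)]) for a in angles), key=suboptimality_loss)
print("recovered direction:", best)
print("true direction:     ", c_true / np.linalg.norm(c_true))
```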

Optimal Learning for Structured Bandits

We study structured multi-armed bandits, which is the problem of online decision-making under uncertainty in the presence of structural information. In this problem, the decision-maker needs to discover the best course of action despite observing only uncertain rewards over time. The decision-maker is aware of certain structural information regarding the reward distributions and would …
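
A minimal sketch of why structure helps: if the mean reward of each arm is a known feature times one unknown shared parameter, then every pull is informative about every arm. The 1-D linear structure and the crude UCB-style index below are assumptions for illustration, not the learning algorithm studied in the paper.

```python
# Structured bandit toy: mu_a = theta * feature_a, theta unknown but shared by all arms.
import numpy as np

rng = np.random.default_rng(1)
features = np.array([0.2, 0.5, 0.8, 1.0])   # known structure linking the arms
theta_star = 1.3                            # unknown to the learner
T = 2000

V, S = 1e-6, 0.0                            # sufficient statistics for least squares on theta
total_reward = 0.0
for t in range(1, T + 1):
    theta_hat = S / V
    width = 2.0 * np.sqrt(np.log(t + 1) / V)          # crude confidence radius
    ucb = features * theta_hat + np.abs(features) * width
    a = int(np.argmax(ucb))                           # optimism on the shared parameter
    r = theta_star * features[a] + rng.normal()
    V += features[a] ** 2                             # every pull shrinks the uncertainty
    S += features[a] * r                              #   about *all* arms at once
    total_reward += r

print(f"average reward {total_reward / T:.3f} vs best arm {theta_star * features.max():.3f}")
```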

An Online-Learning Approach to Inverse Optimization

In this paper, we demonstrate how to learn the objective function of a decision-maker while only observing the problem input data and the decision-maker’s corresponding decisions over multiple rounds. Our approach is based on online learning and works for linear objectives over arbitrary feasible sets for which we have a linear optimization oracle. As such, …
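
A minimal sketch in the spirit of this online setup: at each round, call a linear-optimization oracle under the current cost estimate, compare its answer with the decision-maker's observed decision, and take a projected subgradient step on the suboptimality loss. The step-size schedule, the unit-ball projection, and the toy oracle built on scipy are assumptions for illustration.

```python
# Online subgradient update of the cost estimate, using a linear-optimization oracle.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
c_true = np.array([0.6, 0.8])        # decision-maker's (unknown) objective, maximized
c_hat = np.array([1.0, 0.0])         # learner's initial guess

def oracle(c, A, b):
    """Linear-optimization oracle: argmax_x c'x over {A x <= b, x >= 0}."""
    return linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2).x

for t in range(1, 201):
    A = rng.uniform(0.5, 2.0, size=(3, 2))           # this round's problem input data
    b = rng.uniform(2.0, 4.0, size=3)
    x_obs = oracle(c_true, A, b)                     # decision-maker's observed decision
    x_hat = oracle(c_hat, A, b)                      # our prediction under c_hat
    grad = x_hat - x_obs                             # subgradient of  max_x c'x - c'x_obs
    c_hat = c_hat - (1.0 / np.sqrt(t)) * grad        # online subgradient step
    c_hat = c_hat / max(1.0, np.linalg.norm(c_hat))  # project back onto the unit ball

print("estimate:", c_hat, " true (normalized):", c_true / np.linalg.norm(c_true))
```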

Online Learning for Strong Branching Approximation in Branch-and-Bound

We present an online learning approach to variable branching in branch-and-bound for mixed-integer linear problems. Our approach consists of learning strong branching scores in an online fashion and using them to make branching decisions. More specifically, numerical scores are used to rank the branching candidates. If, for a given variable, the learned approximation is …
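
A minimal, solver-free sketch of the general idea: rank branching candidates by a learned score, but fall back to the expensive strong-branching computation when a variable's estimate is still unreliable, and fold the result back into the estimate. The running-average model, the reliability threshold, and the stub true_strong_branching_score are assumptions for illustration; the actual method operates inside a branch-and-bound solver.

```python
# Online approximation of strong branching scores with a reliability fallback.
from collections import defaultdict
import random

RELIABILITY = 3                                   # observations needed before trusting the estimate
score_sum = defaultdict(float)
score_cnt = defaultdict(int)

def true_strong_branching_score(var, node):
    """Stub standing in for the expensive strong-branching LP solves."""
    return random.random() + 0.1 * var

def learned_score(var):
    return score_sum[var] / score_cnt[var] if score_cnt[var] else float("inf")

def choose_branching_variable(candidates, node):
    best_var, best_score = None, -1.0
    for var in candidates:
        if score_cnt[var] < RELIABILITY:                  # estimate not yet reliable:
            s = true_strong_branching_score(var, node)    # pay for the exact score ...
            score_sum[var] += s                           # ... and update the online model
            score_cnt[var] += 1
        else:
            s = learned_score(var)                        # otherwise trust the cheap estimate
        if s > best_score:
            best_var, best_score = var, s
    return best_var

# Toy usage: process a few nodes with candidate variables 0..9.
for node in range(5):
    print("branch on", choose_branching_variable(range(10), node))
```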

Hedge Algorithm and Subgradient Methods

We show that the Hedge Algorithm, a method that is widely used in Machine Learning, can be interpreted as a particular subgradient algorithm for minimizing a well-chosen convex function, namely as a Mirror Descent scheme. Using this reformulation, we establish three modifications and extensions of the Hedge Algorithm that are better or at least as …
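
A minimal sketch of the standard Hedge (exponential-weights) update, with a comment recording the mirror-descent reading: the multiplicative update is a subgradient step on the linear losses under the negative-entropy mirror map over the probability simplex. The fixed learning rate and the random loss sequence are assumptions for illustration.

```python
# Hedge / exponential weights over n experts for T rounds.
import numpy as np

rng = np.random.default_rng(3)
n, T = 5, 1000
eta = np.sqrt(2 * np.log(n) / T)        # standard tuning for a known horizon T
w = np.full(n, 1.0 / n)                 # uniform prior over the experts

cum_alg, cum_loss = 0.0, np.zeros(n)
for t in range(T):
    losses = rng.uniform(size=n)        # an adversary could pick these arbitrarily in [0, 1]
    cum_alg += w @ losses               # expected loss of playing the current weights
    cum_loss += losses
    # Mirror-descent step with the negative-entropy mirror map == multiplicative update:
    w = w * np.exp(-eta * losses)
    w /= w.sum()

regret = cum_alg - cum_loss.min()
print(f"regret {regret:.2f}  (order sqrt(T log n) = {np.sqrt(T * np.log(n)):.2f})")
```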

General algorithmic frameworks for online problems

We study general algorithmic frameworks for online learning tasks. These include binary classification, regression, multiclass problems and cost-sensitive multiclass classification. The theorems that we present give loss bounds on the behavior of our algorithms that depend on general conditions on the iterative step sizes.

Citation: International Journal of Pure and Applied Mathematics, Vol. 46 (2008), …
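
A minimal sketch of the generic online template (predict, observe, pay the loss, update with a step size from a schedule), instantiated here for binary classification with a hinge-loss subgradient step. The schedule eta_t = 1/sqrt(t) and the synthetic stream are assumptions for illustration; the paper's loss bounds hinge on conditions imposed on such step sizes.

```python
# Generic online loop: predict, observe the label, update with a scheduled step size.
import numpy as np

rng = np.random.default_rng(4)
d, T = 10, 5000
w_star = rng.normal(size=d)             # linear teacher generating the labels
w = np.zeros(d)                         # online learner's weight vector

mistakes = 0
for t in range(1, T + 1):
    x = rng.normal(size=d)
    y = np.sign(w_star @ x)
    if np.sign(w @ x) != y:
        mistakes += 1
    if y * (w @ x) < 1.0:               # hinge loss is positive: take a subgradient step
        w += (1.0 / np.sqrt(t)) * y * x # step size from the schedule eta_t = 1/sqrt(t)

print(f"online mistakes: {mistakes} out of {T}")
```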