Robust Sensitivity Analysis of the Optimal Value of Linear Programming

We propose a framework for sensitivity analysis of linear programs (LPs) in minimization form, allowing for simultaneous perturbations in the objective coefficients and right-hand sides, where the perturbations are modeled in a compact, convex uncertainty set. This framework unifies and extends multiple approaches for LP sensitivity analysis in the literature and has close ties … Read more
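As a rough illustration of this setup (the notation below is ours, not taken from the abstract), the jointly perturbed LP can be written as

\[
\min_{x \ge 0} \; (c + \Delta c)^{\top} x \quad \text{s.t.} \quad A x \ge b + \Delta b, \qquad (\Delta c, \Delta b) \in \mathcal{U},
\]

where \(\mathcal{U}\) is the compact, convex uncertainty set; the analysis then concerns how the optimal value varies as the perturbation \((\Delta c, \Delta b)\) ranges over \(\mathcal{U}\).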

Robust Dual Response Optimization

This article presents a robust optimization reformulation of the dual response problem developed in response surface methodology. The dual response approach fits separate models for the mean and the variance, and analyzes these two models in a mathematical optimization setting. We use metamodels estimated from experiments with both controllable and environmental inputs. These experiments may … Read more
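One common variant of the dual response problem (shown only as an illustrative sketch; the symbols are assumptions, not taken from the abstract) minimizes the fitted variance model subject to a target on the fitted mean model:

\[
\min_{x \in X} \; \widehat{\sigma}^{2}(x) \quad \text{s.t.} \quad \widehat{\mu}(x) = T,
\]

where \(\widehat{\mu}\) and \(\widehat{\sigma}^{2}\) are metamodels estimated from the experiments and \(T\) is the target for the mean; the robust reformulation protects such a program against uncertainty in these estimated models.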

Duality in Two-stage Adaptive Linear Optimization: Faster Computation and Stronger Bounds

In this paper we derive and exploit duality in general two-stage adaptive linear optimization models. The equivalent dualized formulation we derive is again a two-stage adaptive linear optimization model. Therefore, all existing solution approaches for two-stage adaptive models can be used to solve or approximate the dual formulation. The new dualized model differs from the … Read more
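A generic two-stage adaptive linear optimization model of the kind treated here can be sketched as (notation illustrative)

\[
\min_{x \in X} \; c^{\top} x + \max_{z \in Z} \; \min_{y} \left\{ d^{\top} y \;:\; A(z)\, x + B\, y \ge b(z) \right\},
\]

where \(x\) is decided here-and-now, \(z\) is the uncertain parameter ranging over the uncertainty set \(Z\), and \(y\) is chosen after \(z\) is revealed; per the abstract, the dualized formulation is again a model of this two-stage adaptive type.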

A Data Driven Functionally Robust Approach for Coordinating Pricing and Order Quantity Decisions with Unknown Demand Function

We consider a retailer’s problem of optimal pricing and inventory stocking decisions for a product. We assume that the price-demand curve is unknown, but data is available that loosely specifies the price-demand relationship. We propose a conceptually new framework that simultaneously considers pricing and inventory decisions without a priori fitting a function to the price-demand … Read more
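A stylized version of the joint pricing and stocking problem (a sketch under simplifying assumptions, e.g. no salvage value; the demand function \(D(p)\), unit cost \(c\), price \(p\) and order quantity \(q\) are our notation) is

\[
\max_{p,\, q \ge 0} \; p \, \mathbb{E}\!\left[ \min\{ D(p), q \} \right] - c\, q,
\]

with the functionally robust idea being to hedge against all demand functions \(D\) consistent with the data rather than committing to a single fitted curve.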

When are static and adjustable robust optimization with constraint-wise uncertainty equivalent?

Adjustable Robust Optimization (ARO) yields, in general, better worst-case solutions than static Robust Optimization (RO). However, ARO is computationally more difficult than RO. In this paper, we derive conditions under which the worst-case objective values of ARO and RO problems are equal. We prove that if the uncertainty is constraint-wise and the adjustable variables lie … Read more
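Schematically, and only as an illustration in our own notation, the two problems being compared are

\[
\text{RO:} \;\; \min_{x,\, y} \; \max_{z \in Z} \; f(x, y, z)
\qquad \text{vs.} \qquad
\text{ARO:} \;\; \min_{x} \; \max_{z \in Z} \; \min_{y(z)} \; f(x, y(z), z),
\]

where in RO the variables \(y\) are fixed before the uncertainty \(z\) is revealed, while in ARO they may adjust to \(z\); the paper identifies conditions, constraint-wise uncertainty among them, under which the two worst-case optimal values coincide.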

Centered Solutions for Uncertain Linear Equations

Our contribution is twofold. Firstly, for a system of uncertain linear equations where the uncertainties are column-wise and reside in general convex sets, we show that the intersection of the set of possible solutions and any orthant is convex. We derive a convex representation of this intersection. Secondly, to obtain centered solutions for systems of … Read more
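The systems in question can be written as (illustrative notation)

\[
A(\zeta)\, x = b(\zeta), \qquad \zeta \in \mathcal{Z},
\]

where each column of the coefficient data varies in its own convex set (column-wise uncertainty); the "set of possible solutions" consists of all \(x\) satisfying the system for some admissible \(\zeta\), and the result is that its intersection with any orthant is convex.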

Near-Optimal Ambiguity Sets for Distributionally Robust Optimization

We propose a novel Bayesian framework for assessing the relative strengths of data-driven ambiguity sets in distributionally robust optimization (DRO). The key idea is to measure the relative size between a candidate ambiguity set and an asymptotically optimal set as the amount of data grows large. This asymptotically optimal set is provably the smallest convex … Read more
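For context, the underlying DRO problem being compared across ambiguity sets has the standard form

\[
\min_{x \in X} \; \sup_{\mathbb{P} \in \mathcal{P}} \; \mathbb{E}_{\mathbb{P}}\!\left[ \ell(x, \xi) \right],
\]

where \(\mathcal{P}\) is a data-driven set of candidate distributions for \(\xi\) (the loss \(\ell\) and the symbols here are generic, not taken from the paper); the framework measures how much larger a candidate \(\mathcal{P}\) is than the asymptotically optimal set as the sample size grows.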

Min-max-min Robust Combinatorial Optimization

The idea of k-adaptability in two-stage robust optimization is to calculate a fixed number k of second-stage policies here-and-now. After the actual scenario is revealed, the best of these policies is selected. This idea leads to a min-max-min problem. In this paper, we consider the case where no first stage variables exist and propose to … Read more
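With no first-stage variables, the k-adaptability idea described here leads to a problem of the form (in generic notation, for a combinatorial feasible set \(X \subseteq \{0,1\}^n\) and cost uncertainty set \(U\))

\[
\min_{x^{(1)}, \dots, x^{(k)} \in X} \; \max_{c \in U} \; \min_{i = 1, \dots, k} \; c^{\top} x^{(i)},
\]

that is, \(k\) candidate solutions are fixed here-and-now and, once the cost vector \(c\) is revealed, the cheapest of the \(k\) is implemented.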

A Distributionally-robust Approach for Finding Support Vector Machines

The classical SVM is an optimization problem minimizing the hinge losses of misclassified samples plus a regularization term. When the sample size is small or the data are noisy, the classifier obtained from training data may not generalize well to the population, since the samples may not accurately represent the true population … Read more
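The classical soft-margin SVM referred to here can be written as the regularized hinge-loss minimization (a standard formulation; the trade-off parameter \(\lambda\) is our notation)

\[
\min_{w,\, b} \;\; \frac{\lambda}{2}\, \lVert w \rVert^{2} \;+\; \frac{1}{n} \sum_{i=1}^{n} \max\!\left\{ 0,\; 1 - y_i \left( w^{\top} x_i + b \right) \right\},
\]

with labels \(y_i \in \{-1, +1\}\); a distributionally robust variant replaces the empirical average of the hinge loss by a worst-case expectation over an ambiguity set of distributions.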

Robust optimization with ambiguous stochastic constraints under mean and dispersion information

In this paper we consider ambiguous stochastic constraints under partial information consisting of means and dispersion measures of the underlying random parameters. Whereas the past literature used the variance as the dispersion measure, here we use the mean absolute deviation from the mean (MAD). This makes it possible to use the old result of Ben-Tal … Read more
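One common statement of the classical result alluded to (given here as background, for a scalar random variable \(z\) with support \([a, b]\), mean \(\mu\) and mean absolute deviation \(d\), with \(a < \mu < b\)) is that the worst-case expectation of a convex function \(f\) is attained by a three-point distribution:

\[
\sup_{\mathbb{P}} \; \mathbb{E}_{\mathbb{P}}[f(z)] \;=\; p_a f(a) + p_\mu f(\mu) + p_b f(b),
\qquad
p_a = \frac{d}{2(\mu - a)}, \quad p_b = \frac{d}{2(b - \mu)}, \quad p_\mu = 1 - p_a - p_b,
\]

where the supremum is over all distributions with the given support, mean and MAD (and the probabilities are assumed nonnegative); this is the kind of mean–dispersion information exploited in the paper.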