In distributionally robust optimization (DRO) models, sample data on the underlying exogenous uncertain parameters are often used to construct an ambiguity set of plausible probability distributions. It is common to assume that the sample data are free of noise, an assumption that may fail in data-driven problems where the perceived data are potentially contaminated. This raises the question of whether the statistical estimators of the optimal values obtained by solving the DRO models are statistically robust, that is, whether the differences between the laws of these estimators and their counterparts based on the real (noise-free) data are controllable. In this paper, we derive error bounds on these differences under the Kantorovich metric for two classes of DRO models with applications in machine learning and risk management.
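The notion of statistical robustness above can be illustrated numerically: compare the law of an estimator computed from clean samples with the law of the same estimator computed from contaminated samples, using the 1-Wasserstein (Kantorovich) distance on the real line. The sketch below uses a toy worst-case mean as a stand-in for the optimal value of a DRO model; the estimator, the noise model, and all parameter values are illustrative assumptions, not the constructions of the paper.

```python
import numpy as np
from scipy.stats import wasserstein_distance  # 1-Wasserstein (Kantorovich) metric on R

rng = np.random.default_rng(0)

def robust_value(sample, radius=0.1):
    # Toy "robust" optimal value: mean plus a radius-scaled deviation term.
    # This is a hypothetical stand-in for the optimal value of a DRO model.
    return sample.mean() + radius * sample.std()

n, reps = 200, 500
clean_vals, noisy_vals = [], []
for _ in range(reps):
    clean = rng.normal(0.0, 1.0, size=n)    # real (noise-free) data
    noise = rng.normal(0.0, 0.05, size=n)   # small contamination
    clean_vals.append(robust_value(clean))
    noisy_vals.append(robust_value(clean + noise))

# Distance between the laws of the estimator with and without noise.
# Statistical robustness means this stays small when the noise is small.
d = wasserstein_distance(clean_vals, noisy_vals)
print(f"Kantorovich distance between estimator laws: {d:.4f}")
```

Under this setup the two empirical laws of the estimator are close, so the computed distance is small; as the contamination level grows, the distance grows with it, which is the kind of controllable dependence the error bounds quantify.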
Quantitative Statistical Robustness in Distributionally Robust Optimization Models