Kullback-Leibler Divergence Constrained Distributionally Robust Optimization

In this paper we study distributionally robust optimization (DRO) problems where the ambiguity set of the probability distribution is defined by the Kullback-Leibler (KL) divergence. We consider DRO problems where the ambiguity is in the objective function, which takes the form of an expectation, and show that the resulting minimax DRO problems can be formulated as one-layer convex minimization problems. We also consider DRO problems where the ambiguity is in the constraint, which may be either an expectation constraint or a chance constraint. We show that ambiguous expectation-constrained programs may be reformulated as one-layer convex optimization problems that take the form of the Bernstein approximation of Nemirovski and Shapiro (2006), and that ambiguous chance-constrained programs (CCPs) may be reformulated as the original CCP with an adjusted confidence level. A number of examples and special cases are also discussed in the paper to show that the reformulated problems may take simple forms that can be solved easily. The main contribution of the paper is to show that KL divergence constrained DRO problems are often of the same complexity as their original stochastic programming problems and, thus, that the KL divergence appears to be a good candidate for modeling distribution ambiguities in mathematical programming.
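As a rough illustration of the one-layer reformulation described in the abstract: the worst-case expectation over a KL ball of radius eta around a nominal distribution P0 is known to admit a one-dimensional dual form, min over alpha > 0 of alpha * eta + alpha * log E_P0[exp(H(x, xi) / alpha)]. The sketch below approximates that dual with a sample average over draws from P0; the synthetic loss samples, the radius eta, and the SciPy-based one-dimensional search are illustrative assumptions, not the paper's own implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical samples from the nominal distribution P0 of the loss H(x, xi),
# standing in for the expectation in the abstract (illustration only).
rng = np.random.default_rng(0)
losses = rng.normal(loc=1.0, scale=0.5, size=10_000)

eta = 0.1  # assumed KL-divergence radius of the ambiguity set


def dual_objective(alpha: float) -> float:
    """One-dimensional dual: alpha * eta + alpha * log E_P0[exp(H / alpha)]."""
    scaled = losses / alpha
    # log-sum-exp trick for numerical stability of log E[exp(.)]
    lse = np.log(np.mean(np.exp(scaled - scaled.max()))) + scaled.max()
    return alpha * eta + alpha * lse


# Minimizing the convex dual over alpha > 0 gives the worst-case expected loss.
res = minimize_scalar(dual_objective, bounds=(1e-6, 100.0), method="bounded")
print("worst-case expected loss (KL ball) ≈", res.fun)
print("nominal sample mean                ≈", losses.mean())
```

The point of the sketch is that the robust counterpart reduces to a single convex search over alpha, which is of essentially the same difficulty as evaluating the original sample-average objective.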
