Tractable approximation of hard uncertain optimization problems

In many optimization problems, uncertain parameters appear in a convex way, which is problematic because common techniques can only handle concave uncertainty. In this paper, we provide a systematic way to construct conservative approximations to such problems. Specifically, we reformulate the original problem as an adjustable robust optimization problem in which the nonlinearity of the original problem is captured by the new uncertainty set. This adjustable robust optimization problem is linear in both the variables and the uncertain parameters whenever the original uncertainty set is polyhedral, which allows a multitude of techniques from adjustable robust optimization to be applied. Our approach covers a wealth of constraints, including constraints that are convex quadratic, sum-of-max (with and without squaring), log-sum-exp, norm, and negative entropy constraints.
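
To make the convexity issue concrete, here is a minimal illustrative example (not taken from the paper). When the uncertainty enters a robust constraint concavely, for instance affinely,

  $a_0^\top x + \sum_i z_i\, a_i^\top x \le b \quad \forall z \in Z,$

the worst case amounts to maximizing a linear function of $z$ over $Z$, so for polyhedral $Z$ linear programming duality gives an exact tractable reformulation. If instead the uncertainty enters convexly, e.g.

  $\sum_i x_i z_i^2 \le b \quad \forall z \in Z, \qquad x \ge 0,$

the worst case requires maximizing a convex function over $Z$, which is NP-hard in general; this is the type of constraint for which the paper constructs conservative approximations.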
