Distributionally robust chance-constrained Markov decision processes

A Markov decision process (MDP) is a decision-making framework in which a decision maker seeks to maximize the expected discounted value of a stream of rewards received at future stages at the states visited according to a controlled Markov chain. Many algorithms, including linear programming methods, are available in the literature to compute an optimal policy when the rewards and transition probabilities are deterministic. In this paper, we consider an MDP problem where the transition probabilities are known and the reward vector is a random vector whose distribution is only partially known. We formulate the MDP problem using a distributionally robust chance-constrained optimization framework under various moment-based uncertainty sets, as well as statistical-distance-based uncertainty sets defined using phi-divergences and the Wasserstein distance metric. For each type of uncertainty set, we consider the case where the random reward vector has either full support or nonnegative support. In the full-support case, we show that the distributionally robust chance-constrained Markov decision process is equivalent to a second-order cone programming problem for the moment- and phi-divergence-based uncertainty sets, and to a mixed-integer second-order cone programming problem for the Wasserstein distance-based uncertainty set. In the nonnegative-support case, it is equivalent to a copositive optimization problem for the moment-based uncertainty sets and to a biconvex optimization problem for the Wasserstein distance-based uncertainty set. As an application, we study a machine replacement problem and present numerical experiments on randomly generated instances.
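To make the setup concrete, one common way to write such a distributionally robust chance-constrained MDP is sketched below; the notation ($\mathcal{X}$ for the set of discounted occupation measures, $\mathcal{D}$ for the ambiguity set, $\epsilon$ for the risk level) is ours rather than the paper's. The decision maker maximizes a guaranteed value $v$ that the random discounted reward must meet with probability at least $1-\epsilon$ under every distribution in the ambiguity set:

\[
\max_{v \in \mathbb{R},\; x \in \mathcal{X}} \; v
\qquad \text{s.t.} \qquad
\inf_{F \in \mathcal{D}} \, \mathbb{P}_{R \sim F}\!\left( R^{\top} x \ge v \right) \ge 1 - \epsilon,
\]

where $R$ is the random reward vector and $R^{\top} x$ is the discounted value of the stationary policy induced by the occupation measure $x$ when the reward realization is $R$.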
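As an illustration of where the second-order cone structure arises in the full-support, moment-based case, consider the ambiguity set $\mathcal{D}$ of all distributions with a given mean $\bar{r}$ and covariance $\Sigma \succ 0$ (whether the paper's moment sets take exactly this form is our assumption). By the classical one-sided Chebyshev (Cantelli) bound argument, the worst-case chance constraint then admits the deterministic equivalent

\[
\inf_{F \in \mathcal{D}} \, \mathbb{P}_{R \sim F}\!\left( R^{\top} x \ge v \right) \ge 1 - \epsilon
\;\Longleftrightarrow\;
\bar{r}^{\top} x \,-\, \sqrt{\frac{1-\epsilon}{\epsilon}} \, \left\| \Sigma^{1/2} x \right\|_{2} \,\ge\, v,
\]

which is a second-order cone constraint in $(x, v)$, so the resulting problem is a second-order cone program.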
