Simplex Algorithm for Countable-state Discounted Markov Decision Processes

We consider discounted Markov Decision Processes (MDPs) with countably-infinite state spaces, finite action spaces, and unbounded rewards. Typical examples of such MDPs are inventory management and queueing control problems in which there is no fixed upper bound on the size of the inventory or queue. Existing solution methods produce a sequence of policies that converges to optimality in value but need not improve monotonically; that is, a policy in the sequence may be worse than its predecessors. Our approach works with countably-infinite linear programming (CILP) formulations of these MDPs, where a CILP is a linear program (LP) with countably-infinite numbers of variables and constraints. Under standard assumptions for analyzing MDPs with countably-infinite state spaces and unbounded rewards, we extend the major extreme point and duality results to the resulting CILPs. Under an additional technical assumption, which is satisfied by several applications of interest, we present a simplex-type algorithm that is implementable in the sense that each iteration requires only a finite amount of data and computation. We show that the algorithm produces a sequence of policies that improves monotonically and converges to optimality in value. Unlike existing simplex-type algorithms for CILPs, our proposed algorithm solves a class of CILPs in which each constraint may contain an infinite number of variables and each variable may appear in an infinite number of constraints. A numerical illustration for inventory management problems is also presented.
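For context, a standard LP formulation of a discounted MDP (a textbook formulation, not necessarily the exact CILP used in the paper) is the dual LP over state-action occupancy measures x(s,a). When the state space S is countably infinite, each equality constraint sums over infinitely many variables, and each variable x(s,a) appears in the constraint of every state j reachable from s, which is precisely the CILP structure described above.

% Dual LP over occupancy measures x(s,a) for a discounted MDP with states S,
% actions A, rewards r(s,a), transition probabilities p(j|s,a), discount
% factor beta in (0,1), and initial distribution alpha. Shown for context;
% the paper's CILP formulation and assumptions may differ in details.
\begin{align*}
  \max_{x \ge 0} \quad & \sum_{s \in S} \sum_{a \in A} r(s,a)\, x(s,a) \\
  \text{s.t.} \quad    & \sum_{a \in A} x(j,a)
      \;-\; \beta \sum_{s \in S} \sum_{a \in A} p(j \mid s,a)\, x(s,a)
      \;=\; \alpha(j), \qquad j \in S .
\end{align*}

With countably many states, the constraint for a fixed j involves every pair (s,a) with p(j|s,a) > 0, so a single constraint can contain infinitely many variables, and a single variable can appear in infinitely many constraints.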

Citation

Article, 11/2014
