We give a finite state approximation scheme for countable state controlled robust/risk-averse Markov chains, where there is uncertainty in the transition probabilities. A convergence theorem, along with the corresponding rate for this approximation, is established. An approximation to the stationary optimal policy is also given. Our results show a fundamental difference between the finite state approximations for classical/risk-neutral and robust/risk-averse controlled Markov chains. In the risk-averse case, depending on the size of the uncertainty radius in the transition densities, the discount rate 0 ≤ β < 1 must be small enough to compensate for the radius of uncertainty; otherwise, convergence cannot be guaranteed. This phenomenon does not arise in finite state approximations for risk-neutral controlled Markov chains.
Finite State Approximations for Robust Markov Decision Processes