Stability of Markovian Stochastic Programming

Multi-stage stochastic programming is notoriously hard, since solution methods suffer from the curse of dimensionality. Recently, stochastic dual dynamic programming has shown promising results for Markovian problems with many stages and a moderately large state space. In order to numerically solve these problems, simple discrete representations of Markov processes are required, but a convincing theoretical foundation for the generation of these approximations is still lacking. This paper aims to fill this gap and proposes a framework to analyze quantitative stability for multi-stage stochastic optimization problems with a Markovian structure. The results show how the objective values change if the underlying stochastic process is approximated by a simpler one. The resulting bound is formulated using the Fortet-Mourier distance and applies to problems whose value functions are locally Lipschitz continuous in the random data. The framework covers important classes of stochastic optimization problems, and the results motivate approximations of general Markov processes by discrete scenario lattices that can be used to obtain numerical solutions. We propose a computationally cheap stochastic gradient descent algorithm for building lattices and show that out-of-sample objectives as well as decisions converge to the respective quantities of the original problem as the approximation gets finer. A numerical study of a multi-period newsvendor problem provides a practical proof of concept of the proposed ideas.
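The lattice-building idea mentioned above can be illustrated with a minimal sketch: each stage holds a small set of node values, and a stochastic gradient (competitive-learning) update repeatedly draws a fresh trajectory of the process and pulls the nearest node at each stage toward the observed value. This is a generic stochastic quantization scheme under assumed names (`build_lattice`, `ar1_path`, the step-size rule), not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_path(n_stages=4, rho=0.8, sigma=1.0):
    """Sample one trajectory of a simple AR(1) Markov process
    (stand-in for the general Markov process of the paper)."""
    x, path = 0.0, []
    for _ in range(n_stages):
        x = rho * x + sigma * rng.standard_normal()
        path.append(x)
    return path

def build_lattice(sample_path, n_stages, n_nodes, n_iter=5000, lr0=0.5):
    """Quantize a Markov process into a scenario lattice by
    stochastic gradient descent on the quantization error."""
    # initialize each stage's nodes randomly
    lattice = [rng.standard_normal(n_nodes) for _ in range(n_stages)]
    for k in range(1, n_iter + 1):
        path = sample_path()           # one fresh trajectory
        lr = lr0 / k ** 0.75           # Robbins-Monro step size
        for t in range(n_stages):
            nodes = lattice[t]
            j = np.argmin(np.abs(nodes - path[t]))  # nearest node at stage t
            nodes[j] += lr * (path[t] - nodes[j])   # pull it toward the sample
    return lattice

lattice = build_lattice(lambda: ar1_path(4), n_stages=4, n_nodes=5)
```

As the number of nodes per stage and the number of iterations grow, the lattice becomes a finer discrete representation of the process, which is the regime in which the paper's stability bound controls the change in objective values.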
