On the Out-of-Sample Performance of Stochastic Dynamic Programming and Model Predictive Control

Sample average approximation-based stochastic dynamic programming (SDP) and model predictive control (MPC) are two different methods for approaching multistage stochastic optimization. In this paper we investigate the conditions under which SDP may be outperformed by MPC. We show that, depending on the presence of concavity or convexity, MPC can be interpreted as solving a mean-constrained, distributionally ambiguous version of the problem that is solved by SDP. This furnishes performance guarantees when the true mean is known and provides intuition for why MPC performs better in some applications and worse in others. We then study a multistage stochastic revenue optimization problem that is representative of the type for which MPC may be the better choice. We find that this may indeed be the case when the probability distribution of the underlying random variable is skewed or has enough weight in its right-hand tail, and we support this with a number of examples.
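As a rough illustration of the role of concavity and convexity (a schematic one-stage sketch under the standard assumption that MPC acts as a certainty-equivalent controller; the notation $r$ for the stage reward, $u$ for the decision, $U(x)$ for the feasible set, and $\xi$ for the random variable is hypothetical and not the paper's multistage formulation), compare

\[
  v_{\mathrm{SDP}}(x) \;=\; \max_{u \in U(x)} \, \mathbb{E}_{\xi}\!\big[\, r(x,u,\xi) \,\big]
  \qquad \text{with} \qquad
  v_{\mathrm{CE}}(x) \;=\; \max_{u \in U(x)} \, r\!\big(x,u,\mathbb{E}[\xi]\big).
\]

If $r(x,u,\cdot)$ is concave in $\xi$, Jensen's inequality gives $r(x,u,\mathbb{E}[\xi]) \ge \mathbb{E}_{\xi}[r(x,u,\xi)]$ for every $u$, so the mean-based value over-estimates the achievable expected reward; if it is convex, the inequality reverses. This one-stage intuition is consistent with the mean-constrained, distributionally ambiguous interpretation of MPC described in the abstract.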