For multistage stochastic programming problems with stagewise independent uncertainty, dynamic programming algorithms calculate polyhedral approximations for the value functions at each stage. The SDDP algorithm provides piecewise linear lower bounds, in the spirit of the L-shaped algorithm, and corresponding upper bounds took longer to appear. One strategy uses the primal dynamic programming recursion to build inner approximations of the value functions, while a second one builds lower approximations for the conjugate of the value functions. The resulting dynamic programming recursion for the conjugate value functions does not decompose over scenarios, which suggests a Lagrangian relaxation. We prove that this Lagrangian relaxation corresponds exactly to the inner upper bounds for a natural choice of multipliers.
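To make the two kinds of polyhedral approximations concrete, the following sketch (not from the paper; the quadratic stand-in `V(x) = x**2` and the sample points are illustrative assumptions) contrasts an SDDP-style outer approximation, the pointwise maximum of affine cuts, with an inner approximation built from convex combinations of evaluated points. For a one-dimensional convex value function, the cuts give a lower bound and interpolation between sampled points gives an upper bound.

```python
# Illustrative sketch only: outer (cut-based) vs. inner (interpolation-based)
# polyhedral approximations of a convex value function V(x) = x**2.

def outer_bound(x, cuts):
    """Lower bound: pointwise max of affine cuts alpha + beta * x."""
    return max(a + b * x for a, b in cuts)

def inner_bound(x, points):
    """Upper bound: convex combination of evaluated points.
    In 1-D, linear interpolation between bracketing samples suffices."""
    pts = sorted(points)
    for (x0, v0), (x1, v1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return (1 - t) * v0 + t * v1
    raise ValueError("x outside the sampled domain")

V = lambda x: x * x                       # stand-in convex value function
xs = [0.0, 0.5, 1.0, 1.5, 2.0]            # trial points (hypothetical)
# Cut at c: V(c) + V'(c) * (x - c), with V'(c) = 2c.
cuts = [(V(c) - 2 * c * c, 2 * c) for c in xs]
points = [(c, V(c)) for c in xs]

x = 0.75
lo, hi = outer_bound(x, cuts), inner_bound(x, points)
assert lo <= V(x) <= hi                   # sandwich: outer <= V <= inner
```

Refining either approximation (more cuts, more evaluated points) tightens the corresponding bound; the paper's contribution concerns the relation between the inner upper bounds and bounds obtained via conjugate value functions.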
Duality of upper bounds in stochastic dynamic programming