We study the multi-stage stochastic unit commitment problem, in which commitment and generation decisions can be made and adjusted in each time period. We formulate this problem as a Markov decision process, which is "weakly coupled" in the sense that, if the demand constraint is relaxed, the problem decomposes into a separate low-dimensional Markov decision process for each generator. We demonstrate how the dual approximate dynamic programming method of Barty, Carpentier, and Girardeau (RAIRO Operations Research, 44:167-183, 2010) can be adapted to obtain bounds and a policy for this problem. Previous approaches have let the Lagrange multipliers depend only on time, which can result in weak lower bounds. Other approaches have let the multipliers depend on the entire history of past random observations; although this yields a strong lower bound, its ability to handle a large number of sample paths or scenarios is limited. We demonstrate how to bridge these approaches for the stochastic unit commitment problem by letting the multipliers depend on the current observed demand. This yields a favorable tradeoff between strong lower bounds and good scalability in the number of scenarios. We illustrate this approach numerically on a 168-stage stochastic unit commitment problem that includes minimum uptime, minimum downtime, and ramping constraints.
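The decomposition described above can be sketched in generic notation (the symbols below are illustrative and not taken from the source): generators $i = 1, \dots, N$ with per-generator state $x_t^i$, decision $u_t^i$, output $g_t^i$, stage cost $c^i$, and random demand $D_t$ coupling the generators through a single constraint per period.

```latex
% Coupled problem: only the demand constraint links the generators.
\min_{\pi} \ \mathbb{E}\!\left[\sum_{t=1}^{T} \sum_{i=1}^{N} c^i(x_t^i, u_t^i)\right]
\quad \text{s.t.} \quad \sum_{i=1}^{N} g_t^i \ge D_t, \quad t = 1, \dots, T.

% Dualizing the demand constraint with multipliers \lambda_t \ge 0 that are
% restricted to depend on the current observed demand, \lambda_t = \lambda_t(D_t):
L(\lambda) \;=\; \min_{\pi} \ \mathbb{E}\!\left[\sum_{t=1}^{T}
  \Big( \sum_{i=1}^{N} \big( c^i(x_t^i, u_t^i) - \lambda_t(D_t)\, g_t^i \big)
  + \lambda_t(D_t)\, D_t \Big)\right].

% The inner minimization now separates into N independent single-generator
% Markov decision processes plus a term depending only on the demand process,
% and L(\lambda) is a lower bound on the optimal cost for any \lambda \ge 0.
```

Restricting $\lambda_t$ to be a function of $D_t$ alone sits between the two extremes mentioned above: multipliers depending only on $t$ (weak bounds) and multipliers depending on the full observation history (strong bounds but poor scalability).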