We propose a data-driven extension of the stochastic dual dynamic programming (SDDP) algorithm for multistage stochastic linear programs whose uncertainty follows a continuous-state, non-stationary Markov data process. Unlike traditional SDDP methods, which typically assume a known probability distribution, a stagewise independent data process, or uncertainty restricted to the right-hand side of the constraints, our approach removes these restrictions, broadening its applicability to real-world problems. Our scheme avoids constructing an exponentially growing scenario tree while providing theoretical out-of-sample performance guarantees for the proposed SDDP variant. However, sparse training data may induce an optimistic bias that degrades out-of-sample performance. To mitigate this bias, we incorporate distributionally robust optimization based on the modified $\chi^2$ distance and show its equivalence to variance regularization. We validate our approach on real-world applications in finance and energy.
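The equivalence between modified $\chi^2$-based distributional robustness and variance regularization mentioned above can be sketched as follows; the notation here is illustrative and not taken from the paper, and we use the modified $\chi^2$ divergence without a $\tfrac{1}{2}$ factor (conventions differ across the literature). For a cost $Z$, reference distribution $P$, and ambiguity radius $\rho \ge 0$:
\begin{align*}
&\sup_{Q}\;\Big\{\, \mathbb{E}_{Q}[Z] \;:\; D_{\chi^2}(Q \,\|\, P) \le \rho \,\Big\},
\qquad
D_{\chi^2}(Q \,\|\, P) := \mathbb{E}_{P}\!\left[\Big(\tfrac{dQ}{dP} - 1\Big)^{2}\right].
\end{align*}
Writing $\tfrac{dQ}{dP} = 1 + \delta$ with $\mathbb{E}_P[\delta] = 0$ and $\mathbb{E}_P[\delta^2] \le \rho$, the Cauchy--Schwarz inequality gives, whenever the maximizing density stays nonnegative,
\begin{align*}
\sup_{Q}\, \mathbb{E}_{Q}[Z]
= \mathbb{E}_{P}[Z] + \sqrt{\rho \, \mathrm{Var}_{P}(Z)},
\end{align*}
so the worst-case expectation reduces to the empirical mean plus a variance-based penalty, which is the variance-regularization form.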