Regularized Stochastic Dual Dynamic Programming for convex nonlinear optimization problems

We define a regularized variant of the Dual Dynamic Programming algorithm, called REDDP (REgularized Dual Dynamic Programming), to solve nonlinear dynamic programming equations, and we extend the algorithm to nonlinear stochastic dynamic programming equations. The corresponding algorithm, called SDDP-REG, can be seen as a generalization of a recently introduced regularization of the Stochastic Dual Dynamic Programming (SDDP) algorithm, which was studied for linear problems only and with less general prox-centers. We prove the convergence of REDDP and SDDP-REG and assess their performance on portfolio models with direct transaction and market impact costs. In particular, we propose a risk-neutral portfolio selection model that can be cast as a multistage stochastic second-order cone program; the formulation is motivated by the market impact costs incurred by large portfolio rebalancing operations. Numerical simulations show that REDDP is much quicker than DDP on all problem instances considered (up to 184 times quicker) and that SDDP-REG is quicker than SDDP on the tested instances of portfolio selection problems with market impact costs and much faster (8.2 times) on the risk-neutral multistage stochastic linear program implemented.
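
To make the regularization idea concrete, here is a minimal sketch in Python (using the cvxpy modeling library) of a single forward-pass stage subproblem with a quadratic prox term added to the usual cutting-plane model of the cost-to-go function. The function name, the toy feasible set, and the fixed penalty rho are illustrative assumptions, not the paper's actual formulation, which allows more general prox-centers.

    import cvxpy as cp
    import numpy as np

    def regularized_stage_step(c, A, b, cuts, prox_center, rho):
        # Stage subproblem of a regularized forward pass. "cuts" is a
        # list of (alpha, beta) pairs defining the cutting-plane model
        # theta >= alpha + beta @ x of the cost-to-go function.
        n = c.shape[0]
        x = cp.Variable(n)
        theta = cp.Variable()
        cons = [A @ x <= b, x >= 0, theta >= 0]  # illustrative feasible set
        cons += [theta >= alpha + beta @ x for (alpha, beta) in cuts]
        # Quadratic prox term pulls the trial point toward prox_center;
        # setting rho = 0 recovers the unregularized DDP stage problem.
        obj = cp.Minimize(c @ x + theta + rho * cp.sum_squares(x - prox_center))
        cp.Problem(obj, cons).solve()
        return x.value

    # Toy usage with random data
    np.random.seed(0)
    c, A, b = np.ones(5), np.random.rand(3, 5), np.ones(3)
    cuts = [(0.0, -np.ones(5))]
    x_trial = regularized_stage_step(c, A, b, cuts, prox_center=np.zeros(5), rho=1.0)

In the actual algorithms the prox-center would be updated across iterations (for instance using trial points from earlier passes) and the backward pass would add new cuts to the cost-to-go model; both mechanisms are omitted here for brevity.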

Citation

Working paper under submission. Guigues, V., Lejeune, M.A., and Tekaya, W. (January 14, 2017). Regularized Decomposition Methods for Deterministic and Stochastic Convex Optimization and Application to Portfolio Selection with Direct Transaction and Market Impact Costs. Available at SSRN: https://ssrn.com/abstract=2899448 or http://dx.doi.org/10.2139/ssrn.2899448
