We present adaptive sequential SAA (sample average approximation) algorithms for solving large-scale two-stage stochastic linear programs. The iterative framework we propose is organized into \emph{outer} and \emph{inner} iterations as follows: during each outer iteration, a sample-path problem is implicitly generated using a sample of observations or ``scenarios,'' and solved only \emph{imprecisely}, to within a tolerance chosen \emph{adaptively} by balancing the estimated statistical error against the solution error. The solutions from prior iterations serve as warm starts that aid efficient solution of the (piecewise linear convex) sample-path problems generated in subsequent iterations. The generated scenarios can be independent and identically distributed (iid), or dependent, as in Monte Carlo generation using Latin hypercube sampling, antithetic variates, or quasi-Monte Carlo. We characterize the almost-sure and $L_1$-norm convergence of the distance from the generated stochastic iterates to the true solution set. We also characterize the corresponding convergence rate, along with a sample size schedule that attains the best possible work complexity rate; the latter rate is the canonical Monte Carlo rate, analogous to the $\mathcal{O}(\epsilon^{-2})$ optimal complexity for non-smooth convex optimization. We report extensive numerical tests that clearly indicate favorable performance over existing methods, due primarily to the use of a sequential framework, optimal sample sizes, and warm starts. The framework can be stopped in finite time to return a solution endowed with a probabilistic guarantee on quality.
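To make the outer-iteration logic concrete, the following is a minimal sketch of an adaptive sequential SAA loop on a toy one-dimensional problem: minimize $\mathbb{E}[(x-\xi)^2]$ with $\xi \sim N(\mu, 1)$, whose true solution is $x^*=\mu$. The problem, the geometric sample-size schedule, the gradient-descent inner solver, and the specific tolerance rule (stop when the sample-path gradient norm falls below the estimated statistical error) are all illustrative assumptions, not the paper's actual algorithm or constants.

```python
import math
import random

random.seed(0)
MU = 3.0  # toy problem: minimize E[(x - xi)^2], xi ~ N(MU, 1); true solution x* = MU


def solve_sample_path(x0, sample, tol):
    """Inexactly minimize the sample-average objective f_n(x) = mean((x - xi)^2)
    by gradient descent, stopping once |f_n'(x)| <= tol (the adaptive tolerance).
    x0 is the warm start carried over from the previous outer iteration."""
    x = x0
    n = len(sample)
    while True:
        g = 2.0 * sum(x - xi for xi in sample) / n  # exact gradient of f_n at x
        if abs(g) <= tol:
            return x
        x -= 0.25 * g  # step size 1/(2L) for Lipschitz constant L = 2


def adaptive_saa(num_outer=8, n0=32):
    """Outer loop: grow the sample, estimate statistical error, and solve each
    sample-path problem only to within that error, warm-started from the last iterate."""
    x = 0.0  # initial warm start
    n = n0
    for _ in range(num_outer):
        sample = [random.gauss(MU, 1.0) for _ in range(n)]
        # Estimate the statistical error of the sample-path optimum: sigma_hat / sqrt(n).
        mean = sum(sample) / n
        var = sum((xi - mean) ** 2 for xi in sample) / (n - 1)
        stat_err = math.sqrt(var / n)
        # Solve imprecisely: tighter accuracy than the statistical error is wasted work.
        x = solve_sample_path(x, sample, tol=2.0 * stat_err)
        n *= 2  # geometric sample-size schedule
    return x


print(adaptive_saa())  # close to MU = 3.0
```

The key design point mirrored from the abstract is the balancing act: the inner solver's tolerance shrinks at the same rate as the statistical error of the growing sample, so neither source of error dominates the total work.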
Citation: Submitted for publication