On Rates of Convergence for Stochastic Optimization Problems Under Non-I.I.D. Sampling

In this paper we discuss the issue of solving stochastic optimization problems by means of sample average approximations. Our focus is on rates of convergence of estimators of optimal solutions and optimal values with respect to the sample size. This problem is well studied in the case where the samples are independent and identically distributed (i.e., when standard Monte Carlo simulation is used); here we study the case where that assumption is dropped. Broadly speaking, our results show that, under appropriate assumptions, the rates of convergence for pointwise estimators under a given sampling scheme carry over to the optimization case, in the sense that the approximating optimal solutions and optimal values converge to their true counterparts at the same rates as in pointwise estimation. We apply our results to two well-established sampling schemes, namely, Latin hypercube sampling and randomized quasi-Monte Carlo (QMC). The novelty of our work arises from the fact that, while there has been some work on the use of variance reduction techniques and QMC methods in stochastic optimization, none of the existing work, to the best of our knowledge, has provided a theoretical study of the effect of these techniques on rates of convergence for the optimization problem. We present numerical results for some two-stage stochastic programs from the literature to illustrate the ideas discussed.
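To make the setting concrete, the following is a minimal sketch, not taken from the paper, that contrasts standard i.i.d. Monte Carlo with Latin hypercube sampling (LHS) inside a sample average approximation (SAA) of a simple newsvendor problem. The cost parameters, demand distribution, sample sizes, and replication counts are illustrative assumptions; the LHS draws use scipy.stats.qmc.LatinHypercube.

```python
# Sketch: variability of the SAA optimal-value estimator under i.i.d. Monte
# Carlo vs. Latin hypercube sampling, for a hypothetical newsvendor problem
# min_x E[c*x - p*min(x, D)] with exponential demand D.
import numpy as np
from scipy.stats import qmc, expon
from scipy.optimize import minimize_scalar

c, p = 1.0, 3.0                      # unit cost and selling price (assumed)
demand = expon(scale=10.0)           # demand distribution (assumed)
rng = np.random.default_rng(0)

def saa_optimal_value(demands):
    """Solve min_x (1/n) * sum_i [c*x - p*min(x, D_i)] for one sample."""
    def cost(x):
        return np.mean(c * x - p * np.minimum(x, demands))
    return minimize_scalar(cost, bounds=(0.0, 50.0), method="bounded").fun

def iid(n):
    # Standard Monte Carlo: n independent demand draws.
    return demand.rvs(size=n, random_state=rng)

def lhs(n):
    # LHS on [0,1], mapped to the demand distribution by inverse transform.
    u = qmc.LatinHypercube(d=1, seed=rng).random(n).ravel()
    return demand.ppf(u)

def estimate(n, sampler, reps=200):
    # Replicate the SAA estimator to gauge its spread across samples.
    vals = [saa_optimal_value(sampler(n)) for _ in range(reps)]
    return np.mean(vals), np.std(vals)

for n in (100, 400):
    for name, sampler in (("i.i.d. MC", iid), ("LHS", lhs)):
        m, sd = estimate(n, sampler)
        print(f"n={n:4d}  {name:9s}  mean={m:8.4f}  std={sd:.4f}")
```

In a setup like this one typically sees a smaller standard deviation for the SAA optimal-value estimator under LHS than under i.i.d. sampling at the same sample size; the paper's contribution is a theoretical account of how such pointwise rate improvements carry over to the optimization estimators themselves.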

Citation

SIAM Journal on Optimization, Vol. 19, No. 2, pp. 524–551