Optimal values and solutions of empirical approximations of stochastic optimization problems can be viewed as statistical estimators of their true counterparts. From this perspective, it is important to understand the asymptotic behavior of these estimators as the sample size goes to infinity, a question of both theoretical and practical interest. This area of study has a long tradition in stochastic programming. However, the literature lacks a consistency analysis for problems whose decision variables are taken from an infinite-dimensional space; such problems arise in optimal control, scientific machine learning, and statistical estimation. By exploiting the problem structures typical of these applications, which give rise to hidden norm-compactness properties of solution sets, we prove consistency results for nonconvex risk-averse stochastic optimization problems formulated in infinite-dimensional spaces. The proof rests on several crucial results from the theory of variational convergence. The theoretical results are demonstrated for several important problem classes arising in the literature.
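To make the estimator viewpoint concrete, the following is a minimal sketch in generic notation; all symbols here (the feasible set X, the risk measure R and its empirical counterpart, the random element xi) are illustrative assumptions, not notation taken from the paper.

% Hypothetical notation, not from the paper:
%   X      -- (possibly infinite-dimensional) feasible set
%   \xi    -- random element with i.i.d. samples \xi_1, ..., \xi_N
%   \mathcal{R} -- risk measure (\mathcal{R} = \mathbb{E} recovers the risk-neutral case)
\[
  \text{true problem:}\qquad
  v^\star = \min_{x \in X} \, \mathcal{R}\bigl[f(x,\xi)\bigr],
\]
\[
  \text{empirical problem:}\qquad
  \hat v_N = \min_{x \in X} \, \widehat{\mathcal{R}}_N\bigl[f(x,\xi_1),\dots,f(x,\xi_N)\bigr].
\]
Consistency, in this sketch, means \(\hat v_N \to v^\star\) and \(\operatorname{dist}(\hat x_N, S^\star) \to 0\) almost surely as \(N \to \infty\), where \(S^\star\) is the true solution set and \(\hat x_N\) any minimizer of the empirical problem. When X is infinite-dimensional, bounded sets need not be norm compact, which is the gap the hidden compactness structure is used to close.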
Citation: unpublished, submitted