Computational Aspects of Bayesian Solution Estimators in Stochastic Optimization

We study a class of stochastic programs in which some elements of the objective function are random and their probability distribution has unknown parameters. The goal is to find a good estimate of the optimal solution of the stochastic program using data sampled from the distribution of the random elements. We investigate two common criteria for evaluating the quality of a solution estimator, one based on the difference in objective values and the other based on the Euclidean distance between solutions, and we define the "risk" of an estimator as the expected value of such a criterion over the sample space. Under a Bayesian framework, where a prior distribution is assumed for the unknown parameters, two natural estimation-optimization strategies arise. A "separate" scheme first finds an estimator for the unknown parameters and then uses this estimator in the optimization problem. A "joint" scheme combines the estimation and optimization steps by directly adjusting the distribution in the stochastic program. We analyze the risk difference between the solutions obtained from these two schemes for several classes of stochastic programs, and we provide insight into the computational effort required to solve these problems.
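To make the two schemes concrete, here is a minimal sketch in notation of our own choosing; the symbols $c$, $F_\theta$, $\pi$, $\hat{\theta}$, and $\rho$ are assumptions for exposition, not definitions taken from the paper. Write the stochastic program as $\min_x \mathbb{E}_{\xi \sim F_\theta}[c(x,\xi)]$ with unknown parameter $\theta$, prior $\pi(\theta)$, and sample data $D$ drawn from $F_\theta$:

% Illustrative notation (assumptions, not the paper's definitions):
% stochastic program \min_x E_{\xi \sim F_\theta}[c(x,\xi)], unknown parameter \theta,
% prior \pi(\theta), data D sampled from F_\theta, point estimator \hat{\theta}(D).
\[
  x^{\mathrm{sep}}(D) \in \arg\min_{x}\;
    \mathbb{E}_{\xi \sim F_{\hat{\theta}(D)}}\!\bigl[c(x,\xi)\bigr]
  \qquad \text{(separate: estimate, then optimize)}
\]
\[
  x^{\mathrm{joint}}(D) \in \arg\min_{x}\;
    \mathbb{E}_{\theta \sim \pi(\cdot \mid D)}\,
    \mathbb{E}_{\xi \sim F_{\theta}}\!\bigl[c(x,\xi)\bigr]
  \qquad \text{(joint: optimize under the posterior-adjusted distribution)}
\]
\[
  \mathcal{R}\bigl(x(\cdot)\bigr)
    = \mathbb{E}_{D}\Bigl[\rho\bigl(x(D),\,x^{*}\bigr)\Bigr],
  \qquad
  \rho(x,x^{*}) \in \Bigl\{\, f(x)-f(x^{*}),\;\; \lVert x-x^{*}\rVert_{2} \,\Bigr\},
\]

where $x^{*}$ denotes the true optimal solution and $f$ the true objective function. Under this reading, the joint scheme optimizes against a posterior mixture of distributions rather than a single plug-in distribution, which is one reason its optimization problem may demand greater computational effort.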