From Estimation to Optimization via Shrinkage

We study a class of quadratic stochastic programs in which the distribution of the random variables has unknown parameters. A traditional approach is to estimate the parameters with a maximum likelihood estimator (MLE) and use the estimate as input to the optimization problem. For the unconstrained case, we show that an estimator that “shrinks” the MLE towards an arbitrary vector achieves uniformly lower risk than the MLE. In contrast, when there are constraints, we show that the MLE is admissible.
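To make the idea concrete, below is a minimal sketch of a classical positive-part James–Stein estimator that shrinks the MLE toward an arbitrary target vector. This is an illustration of the general shrinkage principle only; the paper's estimator is tailored to the quadratic stochastic program and need not coincide with this formula. The function name `james_stein`, the single-observation Gaussian setup, and the known-variance assumption are all assumptions made for this sketch.

```python
import numpy as np

def james_stein(x, v, sigma2=1.0):
    """Positive-part James-Stein estimator shrinking the MLE x
    toward an arbitrary target vector v (illustrative sketch).

    Assumes x is a single observation from N(theta, sigma2 * I)
    in dimension p >= 3, where the classical dominance result holds.
    """
    x = np.asarray(x, dtype=float)
    v = np.asarray(v, dtype=float)
    p = x.size
    if p < 3:
        raise ValueError("James-Stein dominance requires dimension p >= 3")
    d = x - v                 # deviation of the MLE from the target
    ss = d @ d                # squared distance ||x - v||^2
    # Positive-part shrinkage factor: pulls x toward v, more strongly
    # when x is close to v relative to the noise level.
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / ss)
    return v + shrink * d

# Example: in 5 dimensions with x = (2,...,2) and target v = 0,
# the factor is 1 - 3/20 = 0.85, so the estimate is (1.7,...,1.7).
est = james_stein(np.full(5, 2.0), np.zeros(5))
```

The key property (proved for the unconstrained quadratic setting in the paper, for its own estimator) is that such shrinkage dominates the MLE in risk regardless of the true parameter, even though the target vector `v` is arbitrary.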

Citation

https://doi.org/10.1016/j.orl.2017.10.005
