A Regularized Smoothing Method for Fully Parameterized Convex Problems with Applications to Convex and Nonconvex Two-Stage Stochastic Programming

We present an approach to regularize and approximate solution mappings of parametric convex optimization problems that combines interior penalty (log-barrier) solutions with Tikhonov regularization. Because the regularized mappings are single-valued and smooth under reasonable conditions, they can be used to build a computationally practical smoothing for the associated optimal value function. The value function in question, while resulting from parameterized convex problems, need not be convex. One motivating application of interest is two-stage (possibly nonconvex) stochastic programming. We show that our approach, being computationally implementable, provides locally bounded upper bounds for the subdifferential of the value function of qualified convex problems. As a by-product of our development, we also recover that in the given setting the value function is locally Lipschitz continuous. Numerical experiments are presented for two-stage convex stochastic programming problems, comparing the approach with the bundle method for nonsmooth optimization.
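As a rough illustration of the idea described above (not the paper's implementation), the sketch below approximates the second-stage value function of a toy linear two-stage problem, Q(x) = min_y { c^T y : W y >= h - T x, y >= 0 }, by adding a log-barrier term with parameter mu and a Tikhonov term with parameter eps, so the resulting value is finite and smooth in x. All data (c, W, h, T) and the names Q_smooth, mu, eps are hypothetical choices for the example, and the solver settings are assumptions.

```python
# Minimal sketch: smoothed, regularized second-stage value
#   Q_{mu,eps}(x) = min_y  c^T y + (eps/2)||y||^2
#                          - mu * sum(log(W y - h + T x)) - mu * sum(log(y))
# (log-barrier for the constraints + Tikhonov regularization of the solution).

import numpy as np
from scipy.optimize import minimize

# toy second-stage data (hypothetical)
c = np.array([1.0, 2.0])
W = np.array([[1.0, 1.0],
              [2.0, 0.5]])
h = np.array([1.0, 1.0])
T = np.array([[1.0],
              [0.5]])

def Q_smooth(x, mu=1e-2, eps=1e-2):
    """Smoothed, regularized second-stage value at first-stage decision x."""
    x = np.atleast_1d(x)
    rhs = h - T @ x            # constraints: W y >= rhs, y >= 0

    def obj(y):
        slack = W @ y - rhs
        if np.any(slack <= 0) or np.any(y <= 0):
            return np.inf       # outside the barrier's domain
        return (c @ y + 0.5 * eps * y @ y
                - mu * np.sum(np.log(slack)) - mu * np.sum(np.log(y)))

    # strictly feasible starting point, well inside the feasible set
    y0 = np.full(len(c), max(np.max(rhs), 0.0) + 1.0)
    res = minimize(obj, y0, method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 5000})
    return res.fun

# the smoothed value varies smoothly with the first-stage decision x
for x in [0.0, 0.5, 1.0]:
    print(f"x = {x:.1f}  Q_smooth(x) = {Q_smooth(x):.4f}")
```

Driving mu and eps to zero recovers (in the limit) the original value function; keeping them positive yields the single-valued, smooth approximation that the paper uses to build bounds on the value function's subdifferential.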
