Penalized stochastic gradient methods for stochastic convex optimization with expectation constraints

The stochastic gradient method and its variants are simple yet effective for minimizing an expectation function over a closed convex set. However, these methods do not apply to stochastic programs with expectation constraints, since the projection onto the feasible set is prohibitively expensive. To solve stochastic convex optimization problems with expectation constraints, we propose a class of penalized stochastic gradient (PSG) methods in which, at each iteration, a stochastic gradient step is performed along the stochastic (sub)gradients of the objective function and the expectation constraint function. We prove that both the basic PSG method and the mini-batch PSG method converge almost surely to an optimal solution under mild assumptions. Favorable convergence rates of these methods are established by carefully selecting the stepsizes. We also extend PSG to optimization problems with multiple expectation constraints. The efficiency of the proposed methods is validated through numerical experiments on a variety of practical instances.
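To make the iteration described above concrete, the sketch below shows one plausible penalized stochastic subgradient loop for a problem of the form min E[F(x, ξ)] subject to E[G(x, ξ)] ≤ 0 over a simple set X. This is a rough illustration under our own assumptions, not the paper's algorithm: the function names, the penalty and stepsize schedules, and the exact way the two stochastic (sub)gradients are combined are all hypothetical; the paper specifies the actual update and parameter choices.

```python
import numpy as np

def psg_sketch(sample_obj_grad, sample_con_val, sample_con_grad, project,
               x0, stepsizes, penalties):
    """Minimal PSG-style sketch (our assumptions, not the paper's method).

    At each iteration, draw a fresh sample and take a stochastic subgradient
    step on the penalized function F(x, xi) + beta_k * max(0, G(x, xi)),
    then project onto the simple set X. Only the cheap projection onto X is
    needed, never the projection onto the expectation-constrained set.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(len(stepsizes)):
        g_f = sample_obj_grad(x)     # stochastic (sub)gradient of objective
        c_val = sample_con_val(x)    # sampled constraint value G(x, xi)
        g_c = sample_con_grad(x)     # stochastic (sub)gradient of constraint
        # Subgradient of beta_k * [G(x, xi)]_+ is active only when the
        # sampled constraint is violated.
        d = g_f + (penalties[k] * g_c if c_val > 0 else 0.0)
        x = project(x - stepsizes[k] * d)
    return x
```

A mini-batch variant, in this sketch, would simply average several sampled values and (sub)gradients per iteration before forming the step direction, which reduces the variance of the penalty term's activation.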

Citation

Unpublished, Dalian University of Technology, Dalian 116024, China, September 2019.
