An Exact Penalty Method for Stochastic Equality-Constrained Optimization

In this paper, we study a penalty method for stochastic equality-constrained optimization, where both the objective and constraints are expressed in general expectation form. We introduce a novel adaptive strategy for updating the penalty parameter, guided by iteration progress so as to balance decrease of the penalty function against reduction of the constraint violation, while each penalty subproblem is approximately solved by a truncated stochastic prox-linear algorithm. Under certain conditions, the proposed penalty method terminates after finitely many iterations, which establishes its exactness: an approximate stationary point of the penalty function corresponds to an approximate KKT point of the original problem. We establish an oracle complexity bound for finding a stochastic $\epsilon$-KKT point, requiring $\mathcal{O}(\epsilon^{-3})$ stochastic gradient evaluations for the objective and constraints, along with $\mathcal{O}(\epsilon^{-5})$ stochastic function evaluations for the constraints. We further develop variants for problems with a stochastic objective and deterministic constraints and for problems with finite-sum objective and constraints, each with a matching complexity analysis. Finally, we report numerical results demonstrating the proposed method’s performance on a test problem.
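
To fix ideas, the following is a minimal sketch of the standard formulation this setting refers to; the notation, choice of norm, penalty form, and prox-linear step below are illustrative assumptions and may differ from the precise statements in the paper. The problem is
\[
\min_{x \in \mathbb{R}^n} \; f(x) := \mathbb{E}_{\xi}\big[F(x;\xi)\big]
\quad \text{s.t.} \quad
c(x) := \mathbb{E}_{\zeta}\big[C(x;\zeta)\big] = 0,
\]
with an associated exact penalty function
\[
\phi_{\rho}(x) := f(x) + \rho\,\|c(x)\|,
\]
where the penalty parameter $\rho > 0$ is increased adaptively across outer iterations. A prox-linear step for $\phi_{\rho}$ at an iterate $x_k$ with stepsize $\alpha_k > 0$ takes the form
\[
x_{k+1} \in \operatorname*{arg\,min}_{x} \;\; \nabla f(x_k)^{\top}(x - x_k)
+ \rho\,\big\| c(x_k) + J_c(x_k)(x - x_k) \big\|
+ \frac{1}{2\alpha_k}\,\|x - x_k\|^2,
\]
where $J_c$ denotes the Jacobian of $c$; in the stochastic setting, $\nabla f$, $c$, and $J_c$ are replaced by minibatch estimates, and the subproblem is solved only approximately (truncated).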
