An algorithm is proposed, analyzed, and tested experimentally for solving stochastic optimization problems in which the decision variables are constrained to satisfy equations defined by deterministic, smooth, and nonlinear functions. It is assumed that constraint function and derivative values can be computed exactly, but that only stochastic approximations are available for the objective function and its derivatives. The algorithm is of the sequential quadratic optimization variety. A distinguishing feature of the algorithm is that it allows inexact subproblem solutions to be employed, which is particularly useful in large-scale settings where the matrices defining the subproblems are too large to form and/or factorize. Conditions are imposed on the inexact subproblem solutions that account for the fact that only stochastic objective gradient estimates are available. Convergence results in expectation are established for the method. Numerical experiments demonstrate that it outperforms an alternative algorithm that employs highly accurate subproblem solutions in every iteration.

## Citation

COR@L Technical Report 21T-015, Department of Industrial and Systems Engineering, Lehigh University, July 7, 2021.