# A momentum-based linearized augmented Lagrangian method for nonconvex constrained stochastic optimization

Nonconvex constrained stochastic optimization has emerged in many important application areas. Subject to general functional constraints, it minimizes the sum of an expectation function and a convex nonsmooth regularizer. The main challenges arise from the stochasticity of the random integrand and the possible nonconvexity of the functional constraints. To address these issues, we propose a momentum-based linearized augmented Lagrangian method (MLALM). MLALM incorporates a recursive momentum scheme to compute stochastic gradients, drawing only one sample at each iteration. Meanwhile, to ease the difficulty of maintaining feasibility with respect to general constraints, we use the stochastic gradients to build a stochastic approximation to the linearized augmented Lagrangian function, which updates the primal variables; these in turn update the dual variables in a moving-average fashion. Under a nonsingularity condition on the constraints and given a nearly feasible initial point, we establish an $\mathcal{O}(\epsilon^{-4})$ oracle complexity for MLALM to find an $\epsilon$-stationary point of the original problem. Numerical experiments on two types of test problems demonstrate the promising performance of the proposed algorithm.
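To illustrate the recursive-momentum idea mentioned above, here is a minimal sketch of a STORM-style single-sample momentum gradient estimator on a toy unconstrained problem. All names and parameter values (`beta`, `eta`, `grad`, the toy objective) are illustrative assumptions, not the paper's actual notation or algorithm; the constrained augmented Lagrangian machinery is omitted.

```python
import numpy as np

# Hedged sketch: recursive momentum (STORM-style) with ONE sample per
# iteration, on the toy problem f(x) = E_xi[(x - xi)^2] / 2, xi ~ N(0, 1),
# whose minimizer is x = 0.  beta, eta are assumed illustrative constants.

rng = np.random.default_rng(0)

def grad(x, xi):
    # Stochastic gradient of (x - xi)^2 / 2 at the single sample xi.
    return x - xi

beta, eta = 0.1, 0.05      # momentum and step-size parameters (assumed)
x = 5.0
xi = rng.normal()
d = grad(x, xi)            # initialize estimator with one stochastic gradient

for _ in range(2000):
    x_prev, x = x, x - eta * d          # primal (gradient) step
    xi = rng.normal()                   # one fresh sample per iteration
    # Recursive momentum update, reusing the SAME sample at both points:
    #   d_t = g(x_t; xi_t) + (1 - beta) * (d_{t-1} - g(x_{t-1}; xi_t))
    d = grad(x, xi) + (1.0 - beta) * (d - grad(x_prev, xi))

print(x)  # iterate should end up close to the minimizer 0
```

The key design point is that evaluating the gradient at both the current and previous iterates with the same sample makes the correction term small, so the estimator's variance shrinks without ever requiring large mini-batches.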