A momentum-based linearized augmented Lagrangian method for nonconvex constrained stochastic optimization

Nonconvex constrained stochastic optimization arises in many important application areas. It minimizes the sum of an expectation function and a nonsmooth regularizer subject to general functional constraints. The main challenges stem from the stochasticity of the random integrand and the possibly nonconvex functional constraints. To address these issues, we propose a momentum-based linearized augmented Lagrangian method (MLALM). MLALM adopts a single-loop framework and incorporates a recursive momentum scheme to compute the stochastic gradient, which enables the construction of a stochastic approximation to the augmented Lagrangian function. We analyze the global convergence of MLALM. Under mild conditions, with unbounded penalty parameters, we show that the sequences of the average stationarity measure and the average constraint violation converge in expectation; under an additional constraint qualification, the sequences of the average constraint violation and the average complementary slackness measure converge to zero in expectation. We also explore the properties of these metrics when the penalty parameters are bounded. Furthermore, we investigate the oracle complexity of MLALM, measured by the total number of stochastic gradient evaluations, to find an $\epsilon$-stationary point and, under the constraint qualification, an $\epsilon$-KKT point. Numerical experiments on two types of test problems demonstrate the promising performance of the proposed algorithm.
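The abstract does not state the update formulas, so the following Python sketch is only a rough illustration of the kind of single-loop scheme it describes: a STORM-type recursive momentum estimator combined with a linearized augmented Lagrangian step for equality constraints. All names, step sizes, and update rules below are illustrative assumptions, not the recursions from the paper (which, in particular, also handles a nonsmooth regularizer, presumably via a proximal step).

```python
import numpy as np

def stoch_grad(x, rng):
    # Noisy gradient of the toy objective f(x) = 0.5 * ||x||^2.
    return x + 0.1 * rng.standard_normal(x.shape)

def mlalm_sketch(x0, lam0, c, c_jac, steps=300, eta=0.05, beta=0.8, rho=5.0, seed=0):
    """Hypothetical single-loop momentum/augmented-Lagrangian iteration:
    one stochastic gradient estimate per primal step, plus dual ascent."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float).copy()
    lam = np.asarray(lam0, float).copy()
    d = stoch_grad(x, rng)  # initial momentum estimate d_0
    for _ in range(steps):
        # Gradient of the augmented-Lagrangian terms lam^T c(x) + (rho/2)||c(x)||^2.
        J = c_jac(x)
        g_al = J.T @ (lam + rho * c(x))
        # Linearized primal step (a prox step would handle a regularizer here).
        x_new = x - eta * (d + g_al)
        # Recursive (STORM-type) momentum: the same fresh sample is evaluated
        # at both x_new and x, so the correction term has low variance.
        s = int(rng.integers(1 << 31))
        g_new = stoch_grad(x_new, np.random.default_rng(s))
        g_old = stoch_grad(x, np.random.default_rng(s))
        d = g_new + (1.0 - beta) * (d - g_old)
        # Dual ascent on the multipliers for the equality constraints c(x) = 0.
        lam = lam + eta * c(x_new)
        x = x_new
    return x, lam

if __name__ == "__main__":
    c = lambda x: np.array([x[0] + x[1] - 1.0])   # single equality constraint
    c_jac = lambda x: np.array([[1.0, 1.0]])
    x, lam = mlalm_sketch(np.zeros(2), np.zeros(1), c, c_jac)
    print("x =", x, "constraint violation =", abs(c(x)[0]))
```

On the toy problem of minimizing $\frac{1}{2}\|x\|^2$ subject to $x_1 + x_2 = 1$, the iterates should approach $x = (0.5, 0.5)$; the common random sample shared between the two gradient evaluations is what gives the recursive momentum estimator its variance reduction.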
