An adaptive single-loop stochastic penalty method for nonconvex constrained stochastic optimization

Adaptive update schemes for penalty parameters are crucial for enhancing the robustness and practical applicability of penalty methods for constrained optimization. In general constrained stochastic optimization, however, additional challenges arise because adaptively chosen penalty parameters introduce extra randomness. To address these challenges, we propose an Adaptive Single-loop Stochastic Penalty method (AdaSSP) in this paper. AdaSSP employs a single-loop algorithmic framework whose penalty parameters are updated dynamically based on the behavior of the iterates. It combines a recursive momentum technique with clipped stochastic gradient computations to reduce the variance induced by stochasticity. We present a high-probability oracle complexity analysis for AdaSSP to reach an $\epsilon$-KKT point. We also establish in-expectation global convergence of the KKT residual at the iterates in the cases where the penalty parameter sequence is unbounded and bounded, respectively. Finally, preliminary numerical results are reported, demonstrating the promising performance of the proposed method.
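To illustrate the kind of single-loop scheme described above, the following is a minimal Python sketch of one possible instantiation: a quadratic-penalty subproblem, a STORM-style recursive momentum estimator, clipped update directions, and a penalty parameter that grows when the constraint violation stalls. The function name `adassp_sketch`, the toy objective and constraint, the step size, the clipping threshold, and the penalty-increase test are all illustrative assumptions and are not the paper's exact AdaSSP update rules.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_f(x, noise):
    """Stochastic gradient of the toy objective f(x) = 0.5 * ||x||^2."""
    return x + noise

def c(x):
    """Toy inequality constraint c(x) = sum(x) - 1 <= 0."""
    return np.sum(x) - 1.0

def grad_c(x):
    return np.ones_like(x)

def penalty_grad(x, rho, noise):
    """Stochastic gradient of f(x) + (rho/2) * max(c(x), 0)^2."""
    return grad_f(x, noise) + rho * max(c(x), 0.0) * grad_c(x)

def clip(v, tau):
    """Scale v so its norm is at most tau (clipped stochastic gradient)."""
    n = np.linalg.norm(v)
    return v if n <= tau else v * (tau / n)

def adassp_sketch(x0, T=500, eta=0.05, beta=0.1, tau=5.0,
                  rho=1.0, growth=2.0, viol_tol=1e-3):
    """Illustrative single-loop stochastic penalty iteration (placeholder rules)."""
    x = x0.copy()
    noise = 0.1 * rng.standard_normal(x.shape)
    d = penalty_grad(x, rho, noise)              # initial momentum estimate
    for _ in range(T):
        x_prev, viol_prev = x, max(c(x), 0.0)
        x = x - eta * clip(d, tau)               # clipped single-loop step
        noise = 0.1 * rng.standard_normal(x.shape)
        # Recursive momentum (STORM-style) variance-reduced estimator:
        # d_t = g(x_t) + (1 - beta) * (d_{t-1} - g(x_{t-1})), same sample.
        d = (penalty_grad(x, rho, noise)
             + (1.0 - beta) * (d - penalty_grad(x_prev, rho, noise)))
        # Adaptive penalty update driven by iterate behavior (placeholder rule):
        # increase rho only when the violation fails to shrink sufficiently.
        viol = max(c(x), 0.0)
        if viol > viol_tol and viol > 0.9 * viol_prev:
            rho = min(rho * growth, 1e8)
    return x, rho

x_star, rho_final = adassp_sketch(np.array([2.0, 2.0]))
```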
