Variance-reduced first-order methods for deterministically constrained stochastic nonconvex optimization with strong convergence guarantees


In this paper, we study a class of deterministically constrained stochastic optimization problems. Existing methods typically aim to find an \(\epsilon\)-stochastic stationary point, where the expected violations of both constraints and first-order stationarity are within a prescribed accuracy \(\epsilon\). However, in many practical applications, it is crucial that the constraints be nearly satisfied with certainty, making such an \(\epsilon\)-stochastic stationary point potentially undesirable due to the risk of significant constraint violations. To address this issue, we propose single-loop variance-reduced stochastic first-order methods, where the stochastic gradient of the stochastic component is computed using either a truncated recursive momentum scheme or a truncated Polyak momentum scheme for variance reduction, while the gradient of the deterministic component is computed exactly. Under the error bound condition with a parameter \(\theta \geq 1\) and other suitable assumptions, we establish that the proposed methods achieve a sample complexity and first-order operation complexity of \(\widetilde O(\epsilon^{-\max\{4, 2\theta\}})\) for finding a stronger \(\epsilon\)-stochastic stationary point, where the constraint violation is within \(\epsilon\) with certainty, and the expected violation of first-order stationarity is within \(\epsilon\). To the best of our knowledge, this is the first work to develop methods with provable complexity guarantees for finding an approximate stochastic stationary point of such problems that nearly satisfies all constraints with certainty.
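For intuition only, the sketch below illustrates a generic STORM-type recursive momentum estimator with a truncation step, which is the kind of variance-reduction mechanism the abstract refers to. The function name, arguments, and the ball-projection form of the truncation are illustrative assumptions, not the paper's exact update rule; see the full text for the precise scheme and step-size choices.

```python
import numpy as np

def truncated_recursive_momentum(grad_new, grad_old, d_prev, beta, radius):
    """Illustrative STORM-type recursive momentum estimator with truncation.

    grad_new : stochastic gradient of the stochastic component at the new iterate
    grad_old : stochastic gradient at the previous iterate (same sample)
    d_prev   : previous variance-reduced gradient estimator
    beta     : momentum parameter in (0, 1]
    radius   : truncation radius (assumed here: projection onto a Euclidean ball)
    """
    # Recursive momentum update: fresh gradient plus a correction term that
    # reuses the previous estimator to reduce variance.
    d = grad_new + (1.0 - beta) * (d_prev - grad_old)

    # Truncation step (assumed form): project the estimator onto a ball of the
    # given radius so that it stays bounded.
    norm = np.linalg.norm(d)
    if norm > radius:
        d = d * (radius / norm)
    return d
```

In the setting described above, an estimator of this type would be combined with the exactly computed gradient of the deterministic component at each iteration of the single-loop method.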
