Extending the Reach of First-Order Algorithms for Nonconvex Min-Max Problems with Cohypomonotonicity


We focus on constrained, \(L\)-smooth, nonconvex-nonconcave min-max problems either satisfying \(\rho\)-cohypomonotonicity or admitting a solution to the \(\rho\)-weakly Minty Variational Inequality (MVI), where larger values of the parameter \(\rho>0\) correspond to a greater degree of nonconvexity. These problem classes include examples in two-player reinforcement learning, interaction-dominant min-max problems, and certain synthetic test problems on which classical min-max algorithms fail. It has been conjectured that first-order methods can tolerate a value of \(\rho\) no larger than \(\frac{1}{L}\), but existing results in the literature have stagnated at the more restrictive requirement \(\rho < \frac{1}{2L}\). With a simple argument, we obtain optimal or best-known complexity guarantees under cohypomonotonicity or weak MVI conditions for the full range \(\rho < \frac{1}{L}\). The algorithms we analyze are inexact variants of Halpern and Krasnosel'skii-Mann (KM) iterations. We also provide algorithms and complexity guarantees in the stochastic case for the same range of \(\rho\). Our main insight for the improvements in the convergence analyses is to harness the recently proposed "conic nonexpansiveness" property of operators. As byproducts, we provide a refined analysis for inexact Halpern iteration and propose a stochastic KM iteration with a multilevel Monte Carlo estimator.
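For context, the classical (exact) templates of these two iterations applied to an operator \(T\) are sketched below; the paper analyzes inexact variants, so the anchoring weight \(\beta_k = \frac{1}{k+2}\) and the fixed averaging parameter \(\lambda \in (0,1)\) shown here are the standard textbook choices, not necessarily the paper's exact parameterization:

\[
\text{Halpern:}\quad x_{k+1} = \beta_k\, x_0 + (1-\beta_k)\, T x_k, \qquad \beta_k = \tfrac{1}{k+2},
\]
\[
\text{KM:}\quad x_{k+1} = (1-\lambda)\, x_k + \lambda\, T x_k, \qquad \lambda \in (0,1).
\]

Halpern iteration anchors each update to the starting point \(x_0\) with a vanishing weight, while KM iteration averages the current point with its image under \(T\) using a constant weight.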
