Minimax optimization has become a central tool in modern machine learning, with applications including generative adversarial networks, robust optimization, and reinforcement learning. These applications are often nonconvex-nonconcave, but the existing theory is unable to identify and deal with the fundamental difficulties posed by nonconvex-nonconcave structures. In this paper, we study the classic proximal point method (PPM) for solving nonconvex-nonconcave minimax problems. We develop a new analytic tool, the saddle envelope, generalizing the Moreau envelope. The saddle envelope not only smooths the objective but can convexify and concavify it, depending on the level of interaction between the minimizing and maximizing variables. From this, we identify three distinct regions of nonconvex-nonconcave minimax problems. For problems where the interaction is sufficiently strong, we derive global linear convergence guarantees. Conversely, when the interaction is fairly weak, we derive local linear convergence guarantees under proper initialization. Between these two settings, we show that PPM may diverge or converge to a limit cycle, and we present a ``Lyapunov''-type function that limits how quickly PPM can diverge.
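To make the method concrete: each PPM step for min_x max_y f(x, y) solves a regularized saddle subproblem, which for smooth f is equivalent to the implicit update z_{k+1} = z_k - \lambda F(z_{k+1}), where F(z) = (\nabla_x f, -\nabla_y f) is the saddle gradient. The following is a minimal sketch, not the paper's implementation: it solves the implicit step by fixed-point iteration (valid when \lambda L < 1 for an L-Lipschitz F), and the bilinear test problem f(x, y) = xy is chosen only as an illustration of strong interaction, where PPM contracts to the saddle point while plain gradient descent-ascent would cycle.

```python
import numpy as np

def saddle_grad(z):
    # Saddle gradient F(z) = (df/dx, -df/dy) for the toy
    # bilinear objective f(x, y) = x * y (illustrative only).
    x, y = z
    return np.array([y, -x])

def ppm_step(z, lam, inner_iters=100):
    # One proximal point step: solve the implicit equation
    # z_new = z - lam * F(z_new) by simple fixed-point iteration,
    # which contracts whenever lam * L < 1.
    z_new = z.copy()
    for _ in range(inner_iters):
        z_new = z - lam * saddle_grad(z_new)
    return z_new

z = np.array([1.0, 1.0])
lam = 0.5
for k in range(20):
    z = ppm_step(z, lam)
print(z)  # iterates contract toward the saddle point (0, 0)
```

On this bilinear example the PPM update is z_{k+1} = (I + \lambda A)^{-1} z_k with A a rotation-like matrix, whose eigenvalues 1 \pm i\lambda have modulus \sqrt{1 + \lambda^2} > 1, so the inverse is a contraction; this is the strong-interaction regime where the global linear convergence guarantee applies.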