A generalized asymmetric forward-backward-adjoint algorithm for the convex-concave saddle-point problem

The convex-concave minimax problem, also known as the saddle-point problem, has been extensively studied from various aspects, including algorithm design, convergence conditions, and complexity. In this paper, we propose a generalized asymmetric forward-backward-adjoint (G-AFBA) algorithm that solves such problems by utilizing both proximal techniques and the interactive information of primal-dual updates. Besides enjoying easy-to-solve proximal subproblems, G-AFBA admits a more relaxed convergence condition, namely, more flexible and larger proximal step sizes, which can yield significant improvements in numerical performance. Based on an equivalent prediction-correction interpretation, we establish the global convergence of G-AFBA and its $O(1/T)$ convergence rate in both the ergodic and pointwise senses, where $T$ denotes the iteration number. The linear convergence rate of G-AFBA is also established under a specific calmness condition. In addition, we propose a modified G-AFBA with an $O(1/T^2)$ convergence rate. Relationships between G-AFBA and several well-established methods are analyzed concisely. Finally, in the appendix, we discuss the application of G-AFBA to a multi-block separable convex optimization problem and a stochastic variant of G-AFBA for solving a well-known machine learning problem.
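For context, algorithms in this family typically target the bilinear saddle-point model; a standard formulation (the functions $f$, $g$ and the linear operator $K$ below are generic placeholders, not notation taken from the paper) is
$$\min_{x}\,\max_{y}\; f(x) + \langle Kx, y\rangle - g(y),$$
where $f$ and $g$ are proper closed convex functions. As a rough illustration of the primal-dual proximal updates such methods build on, a minimal sketch of the classical primal-dual hybrid gradient step (not the exact G-AFBA update) reads
$$x^{k+1} = \operatorname{prox}_{\tau f}\!\big(x^k - \tau K^{\top} y^k\big), \qquad y^{k+1} = \operatorname{prox}_{\sigma g}\!\big(y^k + \sigma K(2x^{k+1} - x^k)\big),$$
where $\tau,\sigma>0$ are the primal and dual proximal step sizes; it is the admissible range of such step sizes that the relaxed convergence condition of G-AFBA enlarges.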
