Generalized asymmetric forward-backward-adjoint algorithms for convex-concave saddle-point problems

The convex-concave minimax problem, also known as the saddle-point problem, has been extensively studied from various aspects, including algorithm design, convergence conditions, and complexity.
In this paper, we propose a generalized asymmetric forward-backward-adjoint algorithm (G-AFBA) to solve such a problem by utilizing both the proximal techniques and the extrapolation of primal-dual updates.
Besides applying proximal primal-dual updates, G-AFBA enjoys a more relaxed convergence condition, namely, more flexible and possibly larger proximal stepsizes, which can yield significant improvements in numerical performance. We study the global convergence of G-AFBA as well as its sublinear convergence rate in terms of both the ergodic iterates and the non-ergodic optimality error. The linear convergence rate of G-AFBA is also established under a calmness condition. Through different choices of parameters and problem settings, we show that G-AFBA is closely related to several well-established or new algorithms. We further propose adaptive and stochastic (inexact) versions of G-AFBA. Our numerical experiments on the robust principal component analysis problem and the 3D CT reconstruction problem indicate the efficiency of both the deterministic and stochastic versions of G-AFBA.
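To make the class of methods concrete, the following is a minimal sketch of a generic proximal primal-dual iteration with extrapolation of the primal update, in the spirit the abstract describes. This is a standard primal-dual hybrid gradient scheme, not the paper's exact G-AFBA update; the toy bilinear saddle-point problem, the stepsizes, and all variable names are illustrative assumptions.

```python
import numpy as np

# Toy saddle-point problem with closed-form solution for checking:
#   min_x max_y  y^T K x + (lam/2)||x||^2 - b^T y - (1/2)||y||^2
# so g(x) = (lam/2)||x||^2 and f*(y) = b^T y + (1/2)||y||^2.
K = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([1.0, -1.0])
lam = 1.0

# Closed-form saddle point from the first-order optimality conditions:
#   K^T y + lam x = 0  and  K x - b - y = 0.
y_star = -np.linalg.solve(np.eye(2) + K @ K.T / lam, b)
x_star = -K.T @ y_star / lam

# Proximal stepsizes satisfying the classical condition tau*sigma*||K||^2 < 1
# (G-AFBA's contribution is precisely a more relaxed condition of this kind).
L = np.linalg.norm(K, 2)
tau = sigma = 0.99 / L

x = np.zeros(2)
y = np.zeros(2)
for _ in range(5000):
    x_old = x
    # Proximal primal step: prox of tau*g at x - tau*K^T y.
    x = (x - tau * K.T @ y) / (1.0 + tau * lam)
    # Extrapolation of the primal update before the dual step.
    x_bar = 2.0 * x - x_old
    # Proximal dual step: prox of sigma*f* at y + sigma*K*x_bar.
    y = (y + sigma * K @ x_bar - sigma * b) / (1.0 + sigma)
```

Since the toy problem is strongly convex in `x` and strongly concave in `y`, the iterates converge to `(x_star, y_star)`; in practice one would monitor a primal-dual residual rather than fix the iteration count.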
