A symmetric extrapolated proximal alternating predictor-corrector method for saddle-point problems

The proximal alternating predictor-corrector (PAPC) method is a widely used first-order algorithm for solving convex-concave saddle-point problems involving both smooth and nonsmooth components. Whereas the primal-dual hybrid gradient (PDHG) method incorporates an extrapolation step with parameter $\theta \in (0,1]$ to improve convergence, the existing convergence analysis of PAPC is limited to the case $\theta = 1$. As a result, the behavior of PAPC under general extrapolation parameters remains largely unexplored. Moreover, despite the intrinsic symmetry between primal and dual variables in saddle-point formulations, the classical PAPC employs an asymmetric update scheme that does not exploit this structure.
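For concreteness, the problem class in question is usually written in the standard composite form (the symbols $f$, $g$, $K$, $\tau$, $\sigma$ below are generic notation for illustration, not necessarily the paper's):
$$
\min_{x}\; f(x) + g(Kx)
\quad\Longleftrightarrow\quad
\min_{x}\max_{y}\; f(x) + \langle Kx, y\rangle - g^{*}(y),
$$
where $f$ is convex and smooth, $g$ is convex and possibly nonsmooth with an inexpensive proximal operator, and $K$ is a linear operator. The PDHG extrapolation step referred to above, written for the case where both proximal operators are available, takes the familiar form
$$
\begin{aligned}
x^{k+1} &= \operatorname{prox}_{\tau f}\!\bigl(x^{k} - \tau K^{\top} y^{k}\bigr),\\
\bar{x}^{k+1} &= x^{k+1} + \theta\,\bigl(x^{k+1} - x^{k}\bigr),\\
y^{k+1} &= \operatorname{prox}_{\sigma g^{*}}\!\bigl(y^{k} + \sigma K \bar{x}^{k+1}\bigr).
\end{aligned}
$$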
In this paper, we introduce a simple yet effective extrapolation step into the PAPC framework, resulting in a new method termed the symmetric proximal alternating predictor-corrector (SPAPC) algorithm. The proposed scheme incorporates extrapolation symmetrically in both primal and dual updates, yielding a balanced and structurally coherent iteration. We demonstrate that this modification allows for relaxed step-size conditions compared to those required by the classical PAPC method. Moreover, we establish the global convergence of SPAPC and provide both ergodic and nonergodic convergence rate guarantees. Numerical experiments further validate the improved efficiency of the proposed method.
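For comparison, the classical (non-extrapolated) PAPC iteration for this problem class, in the form commonly given in the literature and using the same generic notation as above, reads
$$
\begin{aligned}
p^{k+1} &= x^{k} - \tau\bigl(\nabla f(x^{k}) + K^{\top} y^{k}\bigr) && \text{(primal predictor)},\\
y^{k+1} &= \operatorname{prox}_{\sigma g^{*}}\!\bigl(y^{k} + \sigma K p^{k+1}\bigr) && \text{(dual update)},\\
x^{k+1} &= x^{k} - \tau\bigl(\nabla f(x^{k}) + K^{\top} y^{k+1}\bigr) && \text{(primal corrector)}.
\end{aligned}
$$
This is the asymmetric baseline scheme that SPAPC modifies, not the new method itself; the precise symmetric extrapolated update is given in the paper.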

Citation

Lihan Zhou, Feng Ma. A symmetric extrapolated proximal alternating predictor-corrector method for saddle-point problems[J]. Available at http://www.optimization-online.org, 2025.
