Double-proximal augmented Lagrangian methods with improved convergence condition

In this paper, we consider a family of linearly constrained convex minimization problems whose objective function is not necessarily smooth. A preliminary double-proximal augmented Lagrangian method (DP-ALM) is developed, which enjoys a flexible dual stepsize and a proximal subproblem with a relatively small proximal parameter. By a novel prediction-correction reformulation of the proposed DP-ALM, together with analogous variational characterizations of both the saddle point of the problem and the generated sequences, we establish the global convergence of DP-ALM and its sublinear convergence rate in both the ergodic and nonergodic senses. A toy example illustrates that the presented lower bound on the proximal parameter is optimal. In addition, we briefly discuss a relaxed accelerated DP-ALM and the multi-block extension of DP-ALM, as well as their convergence conditions.
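The abstract does not spell out the DP-ALM iteration itself. As a rough, hedged illustration of the general scheme it belongs to, the following sketch implements a generic proximal augmented Lagrangian iteration for a linearly constrained convex problem: a primal subproblem regularized by a proximal term with parameter `tau`, followed by a multiplier update with dual stepsize `s`. The quadratic test objective and all parameter values (`beta`, `tau`, `s`) are assumptions chosen for illustration, not the paper's method or its proved bounds.

```python
import numpy as np

# Illustrative problem:  min (1/2)||x||^2  s.t.  Ax = b  (convex, linearly constrained).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
b = rng.standard_normal(3)

beta = 1.0   # penalty parameter of the augmented Lagrangian (assumed value)
tau = 0.5    # proximal parameter on the primal subproblem (assumed value)
s = 1.0      # dual stepsize (assumed value)

x = np.zeros(5)
lam = np.zeros(3)

# For this quadratic objective the proximal subproblem
#   min_x (1/2)||x||^2 + <lam, Ax - b> + (beta/2)||Ax - b||^2 + (tau/2)||x - x_k||^2
# has a closed-form minimizer given by a fixed linear system.
M = (1.0 + tau) * np.eye(5) + beta * A.T @ A

for _ in range(1000):
    # primal (proximal) step: exact minimizer of the regularized subproblem
    x = np.linalg.solve(M, tau * x - A.T @ lam + beta * A.T @ b)
    # dual ascent step with stepsize s
    lam = lam + s * beta * (A @ x - b)

# Feasibility and stationarity residuals of the KKT system x + A^T lam = 0, Ax = b.
print(np.linalg.norm(A @ x - b), np.linalg.norm(x + A.T @ lam))
```

The choice of `tau` is the crux: the paper's contribution concerns how small this proximal parameter may be taken while convergence is preserved, whereas the sketch above simply fixes a safe value.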
