In this paper, we consider a family of linearly constrained convex minimization problems whose objective function is not necessarily smooth. A basic double-proximal augmented Lagrangian method (DP-ALM) is developed, which enjoys a flexible dual stepsize and a proximal subproblem with a relatively smaller proximal parameter. By a novel prediction-correction reformulation of the proposed DP-ALM, and by similar variational characterizations of both the saddle point of the problem and the generated sequence, we establish the global convergence of DP-ALM and its sublinear convergence rate in both the ergodic and nonergodic senses. A toy example is given to illustrate that the presented lower bound on the proximal parameter is optimal. In addition, we briefly discuss a relaxed accelerated DP-ALM, a linearized DP-ALM, and a multi-block extended DP-ALM, together with their convergence conditions.
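To fix ideas, the following is a minimal sketch of the problem setting and of one common proximal-ALM template; the concrete DP-ALM scheme, its prediction-correction form, and the exact parameter ranges are given in the body of the paper, so the symbols $\theta$, $A$, $b$, $\beta$, $\tau$, and $s$ below are illustrative assumptions rather than the paper's notation.

```latex
% A hedged sketch of the setting, not the paper's exact scheme:
% theta is convex and possibly nonsmooth, A and b are given data,
% and beta > 0 is a penalty parameter.
\begin{align*}
  &\min_{x \in \mathcal{X}} \; \theta(x)
     \quad \text{s.t.} \quad Ax = b, \\[4pt]
  &\mathcal{L}_{\beta}(x,\lambda)
     = \theta(x) - \lambda^{\top}(Ax - b)
       + \tfrac{\beta}{2}\,\|Ax - b\|^{2}, \\[4pt]
  % One proximal-ALM iteration with proximal parameter tau > 0
  % and a flexible dual stepsize s > 0 (both assumed symbols):
  &x^{k+1} \in \arg\min_{x \in \mathcal{X}}
     \Big\{ \mathcal{L}_{\beta}(x,\lambda^{k})
       + \tfrac{\tau}{2}\,\|x - x^{k}\|^{2} \Big\}, \\
  &\lambda^{k+1} = \lambda^{k} - s\beta\,(Ax^{k+1} - b).
\end{align*}
```

In this template, taking $s = 1$ and a sufficiently large $\tau$ recovers the classical proximal ALM; the abstract's stated contribution is to allow a flexible dual stepsize and a smaller lower bound on the proximal parameter while retaining global convergence.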
Double-proximal augmented Lagrangian methods with improved convergence condition