In this paper, we propose a novel double-proximal augmented Lagrangian method (DP-ALM) for solving a family of linearly constrained convex minimization problems whose objective function is not necessarily smooth. The DP-ALM not only enjoys a flexible dual stepsize, but also involves a proximal subproblem with a relatively small proximal parameter. By means of a new prediction-correction reformulation of the DP-ALM, together with analogous variational characterizations of both the saddle point of the problem and the generated sequence, we establish its global convergence and a sublinear convergence rate in both the ergodic and nonergodic senses. A toy example illustrates that the presented lower bound on the proximal parameter is optimal (i.e., the smallest possible). We also discuss a relaxed accelerated variant of DP-ALM, as well as a linearized variant for the case where the objective function has a composite structure. Experimental results on two large-scale sparse optimization problems show that our proposed methods outperform some well-established methods. In the appendix, we briefly discuss a multi-block extension of DP-ALM and its convergence condition.
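For orientation, the problem class referred to in the abstract can be written in the standard form
\[
\min_{x\in\mathbb{R}^n}\ f(x) \quad \text{s.t.} \quad Ax=b,
\]
where $f$ is convex and possibly nonsmooth. The display below is only a generic proximal ALM template under this standard form, with penalty parameter $\beta>0$, proximal matrix $P\succeq 0$, and dual stepsize $s>0$; it is not the exact DP-ALM scheme, whose double-proximal updates, admissible stepsize range, and optimal lower bound on the proximal parameter are specified in the body of the paper:
\[
\begin{aligned}
x^{k+1} &= \operatorname*{arg\,min}_{x}\ \Big\{ f(x) + \langle \lambda^k,\, Ax-b\rangle + \frac{\beta}{2}\|Ax-b\|^2 + \frac{1}{2}\|x-x^k\|_P^2 \Big\},\\
\lambda^{k+1} &= \lambda^k + s\beta\,(Ax^{k+1}-b),
\end{aligned}
\]
where $\|x-x^k\|_P^2=(x-x^k)^{\top}P(x-x^k)$. In this template, the "proximal parameter" weights the proximal term $\frac{1}{2}\|x-x^k\|_P^2$ (a smaller parameter means a subproblem closer to the unregularized augmented Lagrangian subproblem), and the "dual stepsize" is the factor $s$ in the multiplier update.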