Indefinite linearized augmented Lagrangian method for convex programming with linear inequality constraints

The augmented Lagrangian method (ALM) is a benchmark for convex optimization problems with linear constraints; ALM and its variants for linearly equality-constrained convex minimization models have been well studied in the literature. However, much less attention has been paid to ALM for efficiently solving the linearly inequality-constrained convex minimization model. In this paper, we exploit an enlightening reformulation of the most recent indefinite linearized (equality-constrained) ALM and present a novel indefinite linearized ALM scheme for efficiently solving the convex optimization problem with linear inequality constraints. The proposed method offers two main advantages, especially for large-scale problems: first, it significantly simplifies the challenging key subproblem of the classical ALM through a linearized reformulation while keeping the per-iteration computational cost low; second, we prove that a smaller proximal regularization term suffices for the convergence guarantee, which permits a larger step size and can substantially reduce the number of iterations required for convergence. Moreover, we establish a global convergence theory for the proposed scheme based on an equivalent compact prediction-correction representation, together with a worst-case $\mathcal{O}(1/N)$ convergence rate. Numerical results demonstrate that the proposed method converges faster, and is therefore more efficient numerically, as the regularization term becomes smaller, confirming the theoretical results presented in this study.
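To make the setting concrete, the following is a generic sketch of the kind of scheme the abstract refers to, not the paper's exact update; the symbols $f$, $A$, $b$, the penalty $\beta$, the proximal parameter $\tau$, and the multiplier $\lambda$ are notation introduced here for illustration only. For the model $\min_x f(x)$ subject to $Ax \le b$, the classical ALM iterates

$$x^{k+1} = \arg\min_x \Big\{ f(x) + \tfrac{\beta}{2}\,\big\|\,[\,Ax - b + \lambda^k/\beta\,]_+\big\|^2 \Big\}, \qquad \lambda^{k+1} = \big[\,\lambda^k + \beta\,(Ax^{k+1} - b)\,\big]_+,$$

where $[\cdot]_+$ denotes componentwise projection onto the nonnegative orthant. A linearized variant replaces the quadratic coupling term in the $x$-subproblem by its linearization at $x^k$ plus a proximal term $\tfrac{\tau}{2}\|x - x^k\|^2$, so each iteration reduces to a proximal step of $f$ alone. "Indefinite" refers to allowing $\tau$ below the classical threshold $\beta\|A^\top A\|$, so that the implied proximal matrix $\tau I - \beta A^\top A$ need not be positive semidefinite; this is the smaller regularization (and hence larger step size) advantage described above. The precise update and parameter condition are given in the paper.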
