The Rate of Convergence of Augmented Lagrange Method for a Composite Optimization Problem

In this paper, we analyze the rate of local convergence of the augmented Lagrange method for solving optimization problems with equality constraints whose objective function is the sum of a convex function and a twice continuously differentiable function. The non-smoothness of the convex term in the objective requires tools such as second-order variational analysis of the Moreau-Yosida regularization of a convex function and matrix techniques for estimating the generalized Hessian of the dual function defined by the augmented Lagrangian. Under two conditions, we prove that the augmented Lagrange method converges linearly with a ratio constant proportional to $1/c$, where $c$ is the penalty parameter and exceeds a threshold $\overline c > 0$. As an illustrative example, for the nonlinear semidefinite programming problem we show that the assumptions used in Sun et al. \cite{SSZhang2008} for the rate of convergence of the augmented Lagrange method are sufficient for the two conditions adopted in this paper.
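To fix ideas, the iteration analyzed above can be sketched on a toy instance. The sketch below is a minimal, hypothetical illustration (not the paper's algorithm or data): it applies the augmented Lagrange method to $\min \tfrac12\|x-x_0\|^2$ subject to a single linear equality constraint $a^\top x = b$, omitting the nonsmooth convex term. The problem data `a`, `b`, `x0` and the penalty value `c` are assumptions chosen for illustration; for this quadratic problem the multiplier error contracts by the factor $1/(1+c\|a\|^2)$ per step, consistent with a linear rate proportional to $1/c$ for large $c$.

```python
import numpy as np

def augmented_lagrange(a, b, x0, c=10.0, iters=20):
    """Toy augmented Lagrange method for min 0.5||x - x0||^2 s.t. a^T x = b."""
    lam = 0.0  # multiplier estimate
    n = len(a)
    for _ in range(iters):
        # x-step: minimize 0.5||x - x0||^2 + lam*(a^T x - b) + (c/2)(a^T x - b)^2.
        # The stationarity condition is (I + c a a^T) x = x0 - lam*a + c*b*a.
        A = np.eye(n) + c * np.outer(a, a)
        x = np.linalg.solve(A, x0 - lam * a + c * b * a)
        # multiplier update with step size equal to the penalty parameter c
        lam = lam + c * (a @ x - b)
    return x, lam

# illustrative data (assumed, not from the paper)
a = np.array([1.0, 2.0])
b = 3.0
x0 = np.array([2.0, 1.0])
x, lam = augmented_lagrange(a, b, x0)
print(abs(a @ x - b))  # constraint residual after 20 iterations
```

For this instance the optimal multiplier is $\lambda^* = (a^\top x_0 - b)/\|a\|^2 = 0.2$, and the residual contracts by $1/(1 + c\|a\|^2) = 1/51$ per iteration, so the final residual is negligible.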
