Global convergence of a second-order augmented Lagrangian method under an error bound condition

This work establishes convergence to points satisfying the weak second-order necessary optimality conditions for a second-order safeguarded augmented Lagrangian method from the literature. To this end, we propose a new second-order sequential optimality condition that is, in a certain sense, based on the iterates generated by the algorithm itself. This also allows us to establish the best possible global convergence result for the method under study, from which a companion constraint qualification is derived. Unlike similar results in previous works, the new constraint qualification ensures second-order stationarity without requiring constant rank hypotheses. To guarantee this result, we establish convergence of the method under a property slightly stronger than the error bound constraint qualification, which, until now, had not been known to be associated with nonlinear optimization methods.
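For orientation, the following is a minimal sketch of the setting in which such methods are typically analyzed, assuming the standard Powell-Hestenes-Rockafellar (PHR) augmented Lagrangian with safeguarded multiplier estimates; the symbols, safeguarding boxes, and update rules below are illustrative conventions, not necessarily the paper's exact notation.

```latex
% Problem: minimize f(x) subject to h(x) = 0, g(x) <= 0, with f, h, g twice
% continuously differentiable. The (assumed) PHR augmented Lagrangian is
\[
  L_{\rho}(x,\lambda,\mu)
  \;=\;
  f(x)
  \;+\;
  \frac{\rho}{2}\Bigl\|\,h(x)+\tfrac{\lambda}{\rho}\Bigr\|^{2}
  \;+\;
  \frac{\rho}{2}\Bigl\|\max\!\bigl\{0,\;g(x)+\tfrac{\mu}{\rho}\bigr\}\Bigr\|^{2}.
\]
% At outer iteration k, a safeguarded method approximately minimizes
% L_{\rho_k}(\cdot,\bar\lambda_k,\bar\mu_k) (in a second-order variant, up to
% approximate second-order stationarity of the subproblem), then updates the
% multipliers and projects them onto fixed compact boxes:
\[
  \bar\lambda_{k+1} = P_{[\lambda_{\min},\,\lambda_{\max}]}\bigl(\bar\lambda_k + \rho_k h(x_k)\bigr),
  \qquad
  \bar\mu_{k+1} = P_{[0,\,\mu_{\max}]}\bigl(\bar\mu_k + \rho_k g(x_k)\bigr).
\]
% Weak second-order necessary optimality condition at a KKT point
% (x^*, \lambda^*, \mu^*), with Lagrangian \mathcal{L} and active set A(x^*):
\[
  d^{\top}\nabla^{2}_{xx}\mathcal{L}(x^{*},\lambda^{*},\mu^{*})\,d \;\ge\; 0
  \quad\text{for all } d \text{ such that }
  \nabla h_i(x^{*})^{\top}d = 0 \;\forall i
  \text{ and } \nabla g_j(x^{*})^{\top}d = 0 \;\forall j \in A(x^{*}),
\]
% i.e., positive semidefiniteness of the Hessian of the Lagrangian on the
% subspace orthogonal to the gradients of the equality and active inequality
% constraints, rather than over the full critical cone.
```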
