Global convergence of a second-order augmented Lagrangian method under an error bound condition

This work deals with the convergence of a second-order safeguarded augmented Lagrangian method from the literature to points satisfying the weak second-order necessary optimality conditions. To this end, we propose a new second-order sequential optimality condition that is, in a certain sense, based on the iterates generated by the algorithm itself. This also allows us to establish the best possible global convergence result for the method under study, from which a companion constraint qualification is derived. The companion constraint qualification is independent of the Mangasarian-Fromovitz and constant-rank constraint qualifications, while remaining verifiable, as it can be certified by other known constraint qualifications. Furthermore, unlike similar results from previous works, the new constraint qualification cannot be weakened by any other one that preserves the second-order global convergence guarantees of the method, and it ensures second-order stationarity without constant-rank hypotheses. To guarantee the latter result, we establish the convergence of the method under a property slightly stronger than the error bound constraint qualification, which, until now, had not been associated with nonlinear optimization methods.
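For context, a minimal sketch of a safeguarded (Powell-Hestenes-Rockafellar) augmented Lagrangian iteration of the kind studied in the literature on such methods, for minimizing f(x) subject to h(x) = 0 and g(x) <= 0; the notation below (penalty parameter \rho_k, safeguarded multipliers \bar\lambda^k, \bar\mu^k, and the safeguarding boxes) is generic and need not match the paper's own:

L_{\rho}(x,\lambda,\mu) \;=\; f(x) \;+\; \frac{\rho}{2}\left( \left\| h(x) + \tfrac{\lambda}{\rho} \right\|^2 + \left\| \max\!\left\{0,\; g(x) + \tfrac{\mu}{\rho}\right\} \right\|^2 \right),

x^{k+1} \;\approx\; \operatorname*{arg\,min}_{x}\; L_{\rho_k}\!\left(x,\bar\lambda^k,\bar\mu^k\right) \quad \text{(solved approximately; to approximate second-order stationarity in the second-order variant)},

\lambda^{k+1} = \bar\lambda^k + \rho_k\, h(x^{k+1}), \qquad \mu^{k+1} = \max\!\left\{0,\; \bar\mu^k + \rho_k\, g(x^{k+1})\right\},

where \bar\lambda^{k+1} and \bar\mu^{k+1} are obtained by projecting \lambda^{k+1} and \mu^{k+1} onto fixed bounded boxes (the safeguard), and \rho_{k+1} \ge \rho_k is increased whenever a measure of infeasibility and complementarity has not decreased sufficiently.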
