A second-order sequential optimality condition associated to the convergence of optimization algorithms

Sequential optimality conditions have recently played an important role in the analysis of the global convergence of optimization algorithms towards first-order stationary points and in justifying their stopping criteria. In this paper we introduce the first sequential optimality condition that takes into account second-order information. We also present a companion constraint qualification that is less stringent than previous ones associated with the convergence of second-order algorithms, such as the joint Mangasarian-Fromovitz (MFCQ) and Weak Constant Rank condition. Our condition is also weaker than the classical Constant Rank Constraint Qualification, which has likewise been associated with the convergence of second-order algorithms. This means that we can prove second-order global convergence of well-established algorithms even when the set of Lagrange multipliers is unbounded, which overcomes a limitation of previous results based on MFCQ. We prove global convergence of well-known variants of the augmented Lagrangian and regularized SQP methods to second-order stationary points under this new weak constraint qualification.
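
For context, a minimal sketch of the standard first-order sequential optimality condition (AKKT), which the second-order condition introduced here extends; the notation (f, g, h, lambda, mu) is illustrative and not taken from the paper:

```latex
% Nonlinear program: minimize f(x) subject to g_i(x) <= 0 and h_j(x) = 0.
% A point x* satisfies AKKT (first-order sequential optimality) if there exist
% sequences x^k -> x* and multipliers \lambda^k \ge 0, \mu^k such that
\[
  \nabla f(x^k) + \sum_i \lambda_i^k \nabla g_i(x^k) + \sum_j \mu_j^k \nabla h_j(x^k) \longrightarrow 0,
  \qquad
  \min\{-g_i(x^k),\, \lambda_i^k\} \longrightarrow 0 \quad \text{for all } i.
\]
% A second-order sequential condition additionally requires an asymptotic
% positive-semidefiniteness property of the Hessian of the Lagrangian at x^k
% on a suitable subspace; the precise form used in the paper is not reproduced here.
```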
