On the convergence of augmented Lagrangian strategies for nonlinear programming

Augmented Lagrangian algorithms are very popular and successful methods for solving constrained optimization problems. Recently, the global convergence analysis of these methods has been dramatically improved by using the notion of sequential optimality conditions. Such conditions hold at local minimizers independently of the fulfilment of any constraint qualification and provide theoretical tools to justify stopping …
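
As background, the following is a minimal sketch of the classical Powell-Hestenes-Rockafellar augmented Lagrangian iteration for equality-constrained problems, with an AKKT-style stopping test that combines approximate stationarity and approximate feasibility. The inner solver (SciPy's `minimize`) and the simple always-increase penalty update are illustrative assumptions, not the specific algorithm analyzed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, rho=10.0, gamma=10.0, tol=1e-6, max_outer=50):
    """Sketch of a PHR augmented Lagrangian loop for  min f(x)  s.t.  h(x) = 0.

    Each outer iteration approximately minimizes
        L_rho(x, lam) = f(x) + lam . h(x) + (rho/2) ||h(x)||^2
    and then updates the multiplier estimate  lam <- lam + rho * h(x).
    The loop stops when the gradient of L_rho and the infeasibility are both
    small, which is an AKKT-style criterion.
    """
    x = np.asarray(x0, dtype=float)
    lam = np.zeros_like(np.atleast_1d(h(x)))
    for _ in range(max_outer):
        def L(z, lam=lam, rho=rho):
            hz = np.atleast_1d(h(z))
            return f(z) + lam @ hz + 0.5 * rho * (hz @ hz)
        res = minimize(L, x)                        # inner unconstrained subproblem
        x, hx = res.x, np.atleast_1d(h(res.x))
        lam = lam + rho * hx                        # first-order multiplier update
        if max(np.abs(hx).max(), np.abs(res.jac).max()) <= tol:
            break                                   # approximately stationary and feasible
        rho *= gamma                                # otherwise tighten the penalty
    return x, lam

# Example: minimize (x1 - 1)^2 + (x2 - 2)^2 subject to x1 + x2 = 1.
x_opt, lam_opt = augmented_lagrangian(
    f=lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2,
    h=lambda x: np.array([x[0] + x[1] - 1.0]),
    x0=np.zeros(2),
)
```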

Approximate-KKT stopping criterion when Lagrange multipliers are not available

In this paper we investigate how to efficiently apply Approximate-Karush-Kuhn-Tucker (AKKT) proximity measures as stopping criteria for optimization algorithms that do not generate approximations to Lagrange multipliers, in particular Genetic Algorithms. We prove that for a wide range of constrained optimization problems the KKT error measure tends to zero. We also develop a simple model …
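
One common way to compute such a multiplier-free proximity measure, sketched below under illustrative naming and with a user-chosen activity tolerance, is to fit the best nonnegative multipliers for the nearly active inequality constraints by nonnegative least squares and report the residual of the stationarity equation as the KKT error; the measure studied in the paper may differ in its details.

```python
import numpy as np
from scipy.optimize import nnls

def akkt_error(grad_f, g, jac_g, x, act_tol=1e-6):
    """KKT error at x for  min f(x)  s.t.  g_i(x) <= 0,  without multipliers
    supplied by the algorithm (e.g. by a Genetic Algorithm).

    Multipliers for the nearly active constraints are fitted by nonnegative
    least squares; the returned value is the norm of the best achievable
    Lagrangian gradient, so it vanishes exactly at KKT points.
    """
    gf = np.asarray(grad_f(x), dtype=float)
    gv = np.atleast_1d(np.asarray(g(x), dtype=float))
    J = np.atleast_2d(np.asarray(jac_g(x), dtype=float))   # rows: grad g_i(x)
    active = gv >= -act_tol                                 # approximate complementarity
    if not active.any():
        return float(np.linalg.norm(gf))
    # min_{lam >= 0} || gf + J[active].T @ lam ||_2
    _, residual = nnls(J[active].T, -gf)
    return float(residual)

# Example: x = (1, 0) for  min x1^2 + x2^2  s.t.  1 - x1 <= 0  (error is ~0 here).
err = akkt_error(
    grad_f=lambda x: 2.0 * np.asarray(x),
    g=lambda x: np.array([1.0 - x[0]]),
    jac_g=lambda x: np.array([[-1.0, 0.0]]),
    x=np.array([1.0, 0.0]),
)
```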

A new sequential optimality condition for constrained optimization and algorithmic consequences

Necessary first-order sequential optimality conditions provide adequate theoretical tools to justify stopping criteria for nonlinear programming solvers. These conditions are satisfied by local minimizers of optimization problems independently of the fulfillment of constraint qualifications. A new strong sequential optimality condition is introduced in the present paper. A proof that a well-established Augmented Lagrangian …

On sequential optimality conditions for smooth constrained optimization

Sequential optimality conditions provide adequate theoretical tools to justify stopping criteria for nonlinear programming solvers. Approximate KKT and Approximate Gradient Projection conditions are analyzed in this work. These conditions are not necessarily equivalent. Implications between different conditions and counter-examples will be shown. Algorithmic consequences will be discussed.
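
For concreteness, the Approximate KKT condition is usually stated as follows (the Approximate Gradient Projection condition, which instead requires projected gradients onto linearized approximations of the feasible set to vanish, is not reproduced here). For the problem $\min f(x)$ subject to $h(x)=0$, $g(x)\le 0$, a feasible point $x^*$ satisfies AKKT if there exist sequences $x^k \to x^*$, $\lambda^k \in \mathbb{R}^m$, and $\mu^k \in \mathbb{R}^p_+$ such that
\[
\nabla f(x^k) + \sum_{j=1}^{m} \lambda_j^k \nabla h_j(x^k) + \sum_{i=1}^{p} \mu_i^k \nabla g_i(x^k) \to 0
\quad\text{and}\quad
\min\{-g_i(x^k),\, \mu_i^k\} \to 0 \ \text{for all } i.
\]
This is the standard formulation used in this literature, written here in generic notation.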

New stopping criteria for detecting infeasibility in conic optimization

Detecting infeasibility in conic optimization and providing certificates of infeasibility pose a greater challenge than in the linear case due to the lack of strong duality. In this paper we generalize the approximate Farkas lemma of Todd and Ye from the linear to the general conic setting, and use it to propose stopping criteria for …
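
For reference, the exact alternative in the linear case, which the approximate lemma relaxes to tolerances, can be written as
\[
\text{exactly one of}\quad
\{\,Ax = b,\ x \ge 0\,\}
\quad\text{and}\quad
\{\,A^{\mathsf T} y \le 0,\ b^{\mathsf T} y > 0\,\}
\quad\text{has a solution.}
\]
In the conic setting, with $x \ge 0$ replaced by membership in a closed convex cone $K$ and $A^{\mathsf T} y \le 0$ by $-A^{\mathsf T} y \in K^*$, the two systems still cannot both be solvable, but without additional closedness assumptions both may fail, which is the source of the difficulty mentioned above; the precise approximate and conic statements from the paper are not reproduced here.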