Augmented Lagrangians with constrained subproblems and convergence to second-order stationary points

This work considers augmented Lagrangian methods with convergence to second-order stationary points in which each constraint can either be penalized or kept as a constraint in the subproblems. Each subproblem may be solved by any numerical algorithm able to return approximate second-order stationary points. The global convergence theory developed here is stronger than the … Read more
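
For context, a minimal sketch of the subproblem involved, assuming the standard Powell-Hestenes-Rockafellar (PHR) form common in this literature (the paper's exact formulation may differ): with the constraints split into a penalized block h(x) = 0, g(x) ≤ 0 and a block x ∈ Ω carried to the subproblem, each outer iteration approximately solves

\[
\min_{x \in \Omega} \; L_\rho(x;\lambda,\mu) \;=\; f(x) + \frac{\rho}{2}\left( \left\| h(x) + \frac{\lambda}{\rho} \right\|^2 + \left\| \max\!\left\{0,\, g(x) + \frac{\mu}{\rho}\right\} \right\|^2 \right),
\]

and the solver is only required to return an approximate second-order stationary point of this subproblem.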

A second-order optimality condition with first- and second-order complementarity associated with global convergence of algorithms

By introducing so-called tangent multipliers, we develop a new notion of second-order complementarity with respect to the tangent subspace associated with second-order necessary optimality conditions. We prove that, around a local minimizer, a second-order stationarity residual can be driven to zero while the growth of the Lagrange multipliers and tangent multipliers is kept under control, which gives … Read more
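
For reference, the classical condition that this notion refines is the second-order necessary condition (a standard statement in our notation, not the paper's exact one): at a local minimizer x* with Lagrange multiplier λ,

\[
d^{\top} \nabla^{2}_{xx} L(x^{*},\lambda)\, d \;\geq\; 0 \qquad \text{for all } d \in T(x^{*}),
\]

where T(x*) is the tangent subspace orthogonal to the gradients of the active constraints; roughly speaking, the tangent multipliers certify this inequality in a complementarity-like form, and the residual of the resulting system is what can be driven to zero.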

On a conjecture in second-order optimality conditions

In this paper we deal with optimality conditions that can be verified by a nonlinear optimization algorithm, where only a single Lagrange multiplier is available. In particular, we deal with a conjecture formulated in [R. Andreani, J.M. Martinez, M.L. Schuverdt, “On second-order optimality conditions for nonlinear programming”, Optimization, 56:529–542, 2007], which states that whenever a … Read more

A second-order sequential optimality condition associated to the convergence of optimization algorithms

Sequential optimality conditions have recently played an important role in the analysis of the global convergence of optimization algorithms toward first-order stationary points and in justifying their stopping criteria. In this paper we introduce the first sequential optimality condition that takes into account second-order information. We also present a companion constraint qualification that is less stringent … Read more
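
For orientation, the established first-order sequential condition (AKKT), which this paper extends with second-order information, reads as follows for min f(x) subject to g(x) ≤ 0 (our notation): there exist sequences x^k → x* and λ^k ≥ 0 such that

\[
\nabla f(x^{k}) + \sum_{i} \lambda_i^{k} \nabla g_i(x^{k}) \to 0
\qquad \text{and} \qquad
\min\{-g_i(x^{k}),\, \lambda_i^{k}\} \to 0 \ \text{for each } i.
\]

Roughly, the second-order version additionally requires the negative curvature of the Lagrangian Hessian on an approximate critical subspace to vanish along the sequence.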

On Second Order Optimality Conditions in Nonlinear Optimization

In this work we present new weak conditions that ensure the validity of necessary second-order optimality conditions (SOC) for nonlinear optimization. We prove that weak and strong SOCs hold for all Lagrange multipliers under Abadie-type assumptions. We also prove weak and strong SOCs for at least one Lagrange multiplier imposing the … Read more
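
In this literature the weak and strong SOCs differ only in the set of directions on which curvature is tested (standard statement, paraphrased in our notation): given a Lagrange multiplier λ,

\[
d^{\top} \nabla^{2}_{xx} L(x^{*},\lambda)\, d \;\geq\; 0
\qquad \text{for all } d \in
\begin{cases}
S(x^{*}) & \text{(weak SOC)},\\[2pt]
C(x^{*}) & \text{(strong SOC)},
\end{cases}
\]

where S(x*) is the subspace orthogonal to all gradients of active constraints and C(x*) ⊇ S(x*) is the critical cone.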

A Flexible Inexact Restoration Method and Application to Optimization with Multiobjective Constraints under Weighted-Sum Scalarization

We introduce a new flexible Inexact-Restoration (IR) algorithm and an application to problems with multiobjective constraints (MOCP) under the weighted-sum scalarization approach. In IR methods, each iteration has two phases: in the first, one aims to improve feasibility; in the second, one minimizes a suitable objective function. This is done in … Read more
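
The following is a schematic sketch of the two-phase iteration on an equality-constrained toy problem, under our own simplifying assumptions (exact restoration, projected-gradient optimization phase); the paper's flexible IR algorithm adds merit functions, acceptance tests, and inexactness controls.

```python
import numpy as np

def ir_step(x, grad_f, h, jac_h, t=0.5):
    # Phase 1 (restoration): a Gauss-Newton step that reduces ||h(x)||.
    J = jac_h(x)
    y = x - np.linalg.lstsq(J, h(x), rcond=None)[0]
    # Phase 2 (optimization): decrease f along the null space of the
    # constraint Jacobian at the restored point (projected-gradient step).
    Jy = jac_h(y)
    P = np.eye(len(x)) - Jy.T @ np.linalg.solve(Jy @ Jy.T, Jy)
    return y - t * (P @ grad_f(y))

# Toy instance: minimize x0^2 + x1^2 subject to x0 + x1 = 1.
grad_f = lambda x: 2.0 * x
h = lambda x: np.array([x[0] + x[1] - 1.0])
jac_h = lambda x: np.array([[1.0, 1.0]])

x = np.array([2.0, -0.5])
for _ in range(20):
    x = ir_step(x, grad_f, h, jac_h)
print(x)  # approaches the solution (0.5, 0.5)
```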

Approximate-KKT stopping criterion when Lagrange multipliers are not available

In this paper we investigate how to efficiently apply Approximate-Karush-Kuhn-Tucker (AKKT) proximity measures as stopping criteria for optimization algorithms that do not generate approximations to Lagrange multipliers, in particular Genetic Algorithms. We prove that, for a wide range of constrained optimization problems, the KKT error measure tends to zero. We also develop a simple model … Read more
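
One simple way to realize such a criterion, sketched under our own assumptions rather than the paper's exact model: estimate multipliers for the nearly active constraints by nonnegative least squares and report the remaining KKT residual.

```python
import numpy as np
from scipy.optimize import nnls

def akkt_residual(x, grad_f, g, jac_g, eps=1e-6):
    # Nearly active inequality constraints g_i(x) <= 0.
    active = g(x) >= -eps
    if not active.any():
        return float(np.linalg.norm(grad_f(x)))
    G = jac_g(x)[active]             # gradients of the active constraints, one per row
    lam, _ = nnls(G.T, -grad_f(x))   # best lam >= 0 for the KKT equation
    return float(np.linalg.norm(grad_f(x) + G.T @ lam))

# Example: min x0 + x1  s.t.  x0^2 + x1^2 - 2 <= 0; the solution is (-1, -1).
grad_f = lambda x: np.array([1.0, 1.0])
g = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0])
jac_g = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]]])
print(akkt_residual(np.array([-1.0, -1.0]), grad_f, g, jac_g))  # ~0.0
```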

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization

We consider the minimization of a convex function on a compact polyhedron defined by linear equality constraints and nonnegative variables. We define the Levenberg-Marquardt (L-M) and central trajectories starting at the analytic center and using the same parameter, and show that they satisfy a primal-dual relationship and remain close to each other for large values of … Read more
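
Concretely, for min f(x) subject to Ax = b, x ≥ 0, one common way to parameterize the two curves (our notation; the paper's exact setup may differ) is

\[
x_{C}(\mu) \;=\; \arg\min_{Ax=b}\; f(x) - \mu \sum_{i} \ln x_i,
\qquad
x_{LM}(\mu) \;=\; \arg\min_{Ax=b,\; x \ge 0}\; f(x) + \frac{\mu}{2}\,\|x - x^{0}\|^{2},
\]

where x^0 is the analytic center; both trajectories tend to x^0 as μ → ∞, consistent with their being close for large values of the parameter.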

Two new weak constraint qualifications and applications

We present two new constraint qualifications (CQ) that are weaker than the recently introduced Relaxed Constant Positive Linear Dependence (RCPLD) constraint qualification. RCPLD is based on the assumption that many subsets of the gradients of the active constraints preserve positive linear dependence locally. A major open question was to identify the exact set … Read more

A relaxed constant positive linear dependence constraint qualification and applications

In this work we introduce a relaxed version of the constant positive linear dependence constraint qualification (CPLD), which we call RCPLD. This development is inspired by RCR, a recent generalization of the constant rank constraint qualification due to Minchenko and Stakhovski. We show that RCPLD is enough to ensure the convergence of an … Read more
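
For reference, the condition being relaxed can be stated as follows (the standard definition of CPLD, in our wording): a feasible point x* satisfies CPLD if, whenever a subset of gradients of active constraints admits coefficients

\[
\sum_{i \in I} \alpha_i \nabla h_i(x^{*}) + \sum_{j \in J} \beta_j \nabla g_j(x^{*}) = 0,
\qquad \beta \ge 0,\; (\alpha,\beta) \neq 0,
\]

with J indexing active inequalities (positive linear dependence), then the same gradients remain linearly dependent for every x in a neighborhood of x*. Roughly, RCPLD weakens this by requiring the equality gradients to have locally constant rank and testing only those subsets that extend a fixed basis of their span.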