New Constraint Qualifications with Second-Order Properties in Nonlinear Optimization

In this paper we present and discuss new constraint qualifications that ensure the validity of well-known second-order properties in nonlinear optimization. We discuss conditions related to the so-called basic second-order condition, where a new notion of polar pairing is introduced to replace the polar operation, which is useful in the first-order case. We …
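
For context, a minimal LaTeX sketch, in standard notation that is not necessarily the paper's formulation of the basic second-order condition or of the polar pairing, of the classical weak second-order necessary condition that such constraint qualifications are meant to validate:

\[
\exists\,(\lambda,\mu),\ \mu\ge 0:\quad \nabla_x L(x^*,\lambda,\mu)=0
\quad\text{and}\quad
d^{\top}\,\nabla^2_{xx} L(x^*,\lambda,\mu)\,d\;\ge\;0
\quad\forall\,d\in S(x^*),
\]
\[
S(x^*)=\bigl\{\,d:\ \nabla h_j(x^*)^{\top}d=0\ \forall j,\ \ \nabla g_i(x^*)^{\top}d=0\ \forall i\in A(x^*)\,\bigr\},
\]

where $L$ is the Lagrangian of $\min f(x)$ s.t. $h(x)=0,\ g(x)\le 0$, $x^*$ is a local minimizer, and $A(x^*)$ is the set of active inequality constraints.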

Optimality Conditions and Constraint Qualifications for Generalized Nash Equilibrium Problems and their Practical Implications

Generalized Nash Equilibrium Problems (GNEPs) are a generalization of classical Nash Equilibrium Problems (NEPs) in which each player’s strategy set depends on the choices of the other players. In this work we study constraint qualifications and optimality conditions tailored to GNEPs, and we discuss their relations and their implications for the global convergence of algorithms. Surprisingly, differently …
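
For reference, a minimal sketch of the GNEP format underlying this discussion, in standard notation (the paper's precise formulation may differ). Player $\nu\in\{1,\dots,N\}$ solves

\[
\min_{x^{\nu}}\ f_{\nu}(x^{\nu},x^{-\nu})
\quad\text{s.t.}\quad
g^{\nu}(x^{\nu},x^{-\nu})\le 0,
\]

where $x^{-\nu}$ collects the other players' variables. A point $x^*$ is a generalized Nash equilibrium if, for every $\nu$, the block $x^{*,\nu}$ solves player $\nu$'s problem with $x^{-\nu}$ fixed at $x^{*,-\nu}$; in a classical NEP the feasible set of each player would not depend on $x^{-\nu}$.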

On the behavior of Lagrange multipliers in convex and non-convex infeasible interior point methods

This paper analyzes sequences generated by infeasible interior point methods. In both convex and non-convex settings, we prove that driving primal infeasibility to zero at the same rate as complementarity ensures that the Lagrange multiplier sequence remains bounded, provided the limit point of the primal sequence admits a Lagrange multiplier, without constraint qualification assumptions. We …
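
As background, a sketch of the perturbed KKT system targeted by a typical infeasible interior point iteration, written for a standard format ($\min f(x)$ s.t. $c(x)=0$, $x\ge 0$) that serves only to illustrate the setting:

\[
\nabla f(x_k)+\nabla c(x_k)\,y_k-z_k=r_k^{\mathrm{dual}},\qquad
c(x_k)=r_k^{\mathrm{prim}},\qquad
X_k z_k=\mu_k e,\qquad x_k,\,z_k\ge 0.
\]

Driving the primal residual $r_k^{\mathrm{prim}}$ to zero at the same rate as the complementarity parameter $\mu_k$, i.e. $\|r_k^{\mathrm{prim}}\|=O(\mu_k)$, is the coupling under which the multiplier estimates $(y_k,z_k)$ are claimed to remain bounded whenever the primal limit point admits a Lagrange multiplier.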

An extension of Yuan’s Lemma and its applications in optimization

We prove an extension of Yuan’s Lemma to more than two matrices, as long as the set of matrices has rank at most 2. This is used to generalize the main result of [A. Baccari and A. Trad. On the classical necessary second-order optimality conditions in the presence of equality and inequality constraints. SIAM J. …
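
For reference, the classical two-matrix version of Yuan's Lemma that is being extended (standard statement, paraphrased): for symmetric matrices $A,B\in\mathbb{S}^n$,

\[
\max\{\,d^{\top}Ad,\ d^{\top}Bd\,\}\ \ge\ 0\quad\forall\,d\in\mathbb{R}^n
\qquad\Longleftrightarrow\qquad
\exists\,\alpha,\beta\ge 0,\ \alpha+\beta=1:\ \ \alpha A+\beta B\succeq 0.
\]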

Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary

In this paper we consider the minimization of a continuous function that may fail to be differentiable, or twice differentiable, on the boundary of the feasible region. By exploiting an interior point technique, we present first- and second-order optimality conditions for this problem that reduce to the classical ones when the derivative on the boundary is …
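
As a sketch of the interior point idea in a standard linearly constrained format (which may differ from the paper's exact setting), a log-barrier reformulation only evaluates derivatives at strictly interior points, so differentiability on the boundary is never required:

\[
\min_x\ f(x)\ \ \text{s.t.}\ \ Ax=b,\ x\ge 0
\qquad\leadsto\qquad
\min_{x}\ f(x)-\mu\sum_i \ln x_i\ \ \text{s.t.}\ \ Ax=b,\ \ x>0,
\]

with stationarity condition $\nabla f(x)-\mu X^{-1}e+A^{\top}y=0$ involving $\nabla f$ (and, for second-order statements, $\nabla^2 f$) only at interior points; letting $\mu\downarrow 0$ yields conditions that reduce to the classical ones when the boundary derivatives exist.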

Some theoretical limitations of second-order algorithms for smooth constrained optimization

For second-order algorithms, we investigate the relevance of the constant rank of the full set of active constraints in ensuring global convergence to a second-order stationary point. We show that second-order stationarity is not expected in the non-constant-rank case if the growth of the so-called tangent multipliers, associated with a second-order complementarity measure, is …
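
For context, "constant rank of the full set of active constraints" refers to an assumption of the following type (standard constant-rank phrasing; the paper's exact hypothesis may differ):

\[
\operatorname{rank}\bigl\{\nabla h_j(x),\ j=1,\dots,p;\ \ \nabla g_i(x),\ i\in A(x^*)\bigr\}
\ \text{is constant for all } x \text{ near } x^*,
\]

where $A(x^*)=\{i:\ g_i(x^*)=0\}$ is the active set at the limit point $x^*$.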

Augmented Lagrangians with constrained subproblems and convergence to second-order stationary points

In this work we consider augmented Lagrangian methods, with convergence to second-order stationary points, in which any constraint can be penalized or carried over to the subproblems. Each subproblem can be solved by any numerical algorithm able to return approximate second-order stationary points. The global convergence theory developed here is stronger than the …
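
For reference, a minimal sketch in the standard Powell-Hestenes-Rockafellar format, where the constraints $h(x)=0$ and $g(x)\le 0$ are penalized and a remaining set $\Omega$ is carried over to the subproblems (the paper's method may differ in its details):

\[
L_{\rho}(x,\lambda,\mu)
= f(x)+\frac{\rho}{2}\left(\Bigl\|h(x)+\frac{\lambda}{\rho}\Bigr\|^2
+\Bigl\|\max\Bigl\{0,\;g(x)+\frac{\mu}{\rho}\Bigr\}\Bigr\|^2\right),
\qquad
\text{subproblem:}\ \ \min_{x\in\Omega}\ L_{\rho}(x,\lambda,\mu),
\]

with each subproblem solved approximately by any algorithm capable of returning approximate second-order stationary points over $\Omega$.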

A second-order optimality condition with first- and second-order complementarity associated with global convergence of algorithms

By introducing so-called tangent multipliers, we develop a new notion of second-order complementarity with respect to the tangent subspace associated with second-order necessary optimality conditions. We prove that, around a local minimizer, a second-order stationarity residual can be driven to zero while the growth of the Lagrange multipliers and tangent multipliers is controlled, which gives …

On a conjecture in second-order optimality conditions

In this paper we deal with optimality conditions that can be verified by a nonlinear optimization algorithm, where only a single Lagrange multiplier is available. In particular, we address a conjecture formulated in [R. Andreani, J.M. Martinez, M.L. Schuverdt, “On second-order optimality conditions for nonlinear programming”, Optimization, 56:529–542, 2007], which states that whenever a …

A second-order sequential optimality condition associated to the convergence of optimization algorithms

Sequential optimality conditions have recently played an important role in the analysis of the global convergence of optimization algorithms towards first-order stationary points and in justifying their stopping criteria. In this paper we introduce the first sequential optimality condition that takes second-order information into account. We also present a companion constraint qualification that is less stringent …
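
As background, the standard first-order sequential optimality condition (AKKT) that second-order sequential conditions build upon (standard formulation; the new second-order condition introduced in the paper is not reproduced here): a feasible point $x^*$ of $\min f(x)$ s.t. $h(x)=0,\ g(x)\le 0$ satisfies AKKT if there exist sequences $x^k\to x^*$, $\lambda^k\in\mathbb{R}^p$ and $\mu^k\in\mathbb{R}^m_{+}$ such that

\[
\bigl\|\nabla f(x^k)+\nabla h(x^k)\lambda^k+\nabla g(x^k)\mu^k\bigr\|\to 0
\qquad\text{and}\qquad
\bigl\|\min\{-g(x^k),\,\mu^k\}\bigr\|\to 0 .
\]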