Strong global convergence properties of algorithms for nonlinear symmetric cone programming

Sequential optimality conditions have played a major role in proving strong global convergence properties of numerical algorithms for many classes of optimization problems. In particular, the way complementarity is dealt with is fundamental to achieving a strong condition. Typically, one uses the inner product structure to measure complementarity, which gives a very general approach to a … Read more
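As a rough illustration (notation not taken from the paper): in nonlinear symmetric cone programming, complementarity between a primal variable x and a dual variable z in a symmetric cone K is typically measured through the inner product,

x \in K, \qquad z \in K, \qquad \langle x, z \rangle = 0,

and a sequential (approximate) version only asks that \langle x^k, z^k \rangle \to 0 along the iterates; alternative measures, such as the Jordan product x \circ z, are one way to sharpen the resulting condition.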

A relaxed quasinormality condition and the boundedness of dual augmented Lagrangian sequences

Global convergence of augmented Lagrangian methods to a first-order stationary point is well known to hold under fairly weak constraint qualifications. In particular, several constant rank-type conditions have been introduced for this purpose, which have turned out to be relevant beyond this scope as well. In this paper we show that, in fact, under these conditions the sequence … Read more
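For context, a minimal sketch of the safeguarded augmented Lagrangian framework for equality constraints h(x) = 0 (the exact scheme analyzed in the paper may differ): at iteration k one approximately minimizes

L_{\rho_k}(x, \bar\lambda^k) = f(x) + \frac{\rho_k}{2} \left\| h(x) + \frac{\bar\lambda^k}{\rho_k} \right\|^2,

and updates \lambda^{k+1} = \bar\lambda^k + \rho_k h(x^k), where \bar\lambda^k is a safeguarded (bounded) copy of the multiplier; the boundedness question above concerns the true dual sequence \{\lambda^k\}.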

On enhanced KKT optimality conditions for smooth nonlinear optimization

The Fritz John (FJ) and Karush-Kuhn-Tucker (KKT) conditions are fundamental tools for characterizing minimizers and form the basis of almost all methods for constrained optimization. Since the seminal works of Fritz John, Karush, Kuhn and Tucker, FJ/KKT conditions have been enhanced by adding extra necessary conditions. Such an extension was initially proposed by Hestenes in the 1970s … Read more
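For reference, for the problem of minimizing f subject to g_i(x) \le 0, i = 1, \dots, m, the classical FJ conditions at a local minimizer x^* read

\mu_0 \nabla f(x^*) + \sum_{i=1}^m \mu_i \nabla g_i(x^*) = 0, \qquad \mu_i \ge 0, \qquad \mu_i g_i(x^*) = 0, \qquad (\mu_0, \mu) \ne 0,

and the KKT conditions correspond to \mu_0 > 0 (normalized to \mu_0 = 1), which is guaranteed under a constraint qualification; enhanced versions add further requirements on the multipliers.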

A practical second-order optimality condition for cardinality-constrained problems with application to an augmented Lagrangian method

This paper addresses mathematical programs with cardinality constraints (MPCaC). We first define two new tailored (strong and weak) second-order necessary conditions, MPCaC-SSONC and MPCaC-WSONC. We then propose a constraint qualification (CQ), namely the MPCaC-relaxed constant rank constraint qualification (MPCaC-RCRCQ), and establish the validity of MPCaC-SSONC at minimizers under this new CQ. All the concepts proposed … Read more
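For orientation (a generic formulation, not necessarily the exact model of the paper), an MPCaC has the form

\min_x \ f(x) \quad \text{s.t.} \quad x \in X, \qquad \|x\|_0 \le \alpha,

where \|x\|_0 counts the nonzero entries of x; a common continuous reformulation in this literature introduces y \in [0, 1]^n with e^\top y \ge n - \alpha and x_i y_i = 0 for all i, and tailored CQs and second-order conditions are then stated for such reformulations.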

Improving the global convergence of Inexact Restoration methods for constrained optimization problems

Inexact restoration (IR) methods are an important family of numerical methods for solving constrained optimization problems, with applications to electronic structure calculations and bilevel programming, among other areas. In these methods, the minimization is divided into two phases: decreasing infeasibility (feasibility phase) and improving optimality (optimality phase). The feasibility phase does not require the generated points … Read more
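Schematically, one IR iteration for constraints h(x) = 0 can be summarized as follows (details such as merit functions and step control vary between concrete methods): in the feasibility phase one computes y^k with \|h(y^k)\| \le r \|h(x^k)\| for some fixed 0 \le r < 1, and in the optimality phase one computes x^{k+1} that improves optimality (for instance, reduces a Lagrangian) subject to a linearized approximation of the constraints around y^k, accepting the step through a merit or filter criterion.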

An extended delayed weighted gradient algorithm for solving strongly convex optimization problems

The recently developed delayed weighted gradient method (DWGM) is competitive with the well-known conjugate gradient (CG) method for the minimization of strictly convex quadratic functions. Like the CG method, DWGM has some key optimality and orthogonality properties that justify its practical performance. The main difference from the CG method is that, instead of … Read more
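For context, both methods address the strictly convex quadratic problem

\min_{x \in \mathbb{R}^n} \ \tfrac{1}{2} x^\top A x - b^\top x, \qquad A \succ 0,

whose gradient is g(x) = Ax - b, and both build their iterates from gradient information and products with A; the extension discussed here targets strongly convex objectives beyond the quadratic case.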

Convergence of Quasi-Newton Methods for Solving Constrained Generalized Equations

In this paper, we focus on quasi-Newton methods to solve constrained generalized equations. As is well known, this problem was first studied by Robinson and Josephy in the 1970s. Since then, it has been extensively studied by many other researchers, especially Dontchev and Rockafellar. Here, we propose two Broyden-type quasi-Newton approaches for dealing with constrained generalized … Read more
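As a rough sketch of the setting (notation illustrative): a generalized equation asks for x such that 0 \in f(x) + F(x), where f is smooth and F is a set-valued map. Josephy-Newton iterations solve the partially linearized inclusion 0 \in f(x^k) + \nabla f(x^k)(x - x^k) + F(x), and a Broyden-type quasi-Newton variant replaces \nabla f(x^k) by a matrix B_k updated, for example, by

B_{k+1} = B_k + \frac{(y^k - B_k s^k)(s^k)^\top}{(s^k)^\top s^k}, \qquad s^k = x^{k+1} - x^k, \quad y^k = f(x^{k+1}) - f(x^k).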

On scaled stopping criteria for a safeguarded augmented Lagrangian method with theoretical guarantees

This paper discusses the use of a stopping criterion based on scaling the Karush-Kuhn-Tucker (KKT) conditions by the norm of the approximate Lagrange multiplier in the ALGENCAN implementation of a safeguarded augmented Lagrangian method. Such a stopping criterion is already used in several nonlinear programming solvers, but it has not yet been considered in … Read more
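To illustrate the idea (a generic form, not necessarily the exact criterion adopted in ALGENCAN), a scaled KKT stopping test for inequality constraints g(x) \le 0 could read

\frac{\big\| \nabla f(x^k) + \sum_i \lambda_i^k \nabla g_i(x^k) \big\|_\infty}{1 + \|\lambda^k\|_\infty} \le \varepsilon,

so that large approximate multipliers do not keep the unscaled residual artificially above the tolerance.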

On the best achievable quality of limit points of augmented Lagrangian schemes

The optimization literature contains many papers dealing with improvements to the global convergence of augmented Lagrangian schemes. Usually, the results are based on weak constraint qualifications or, more recently, on sequential optimality conditions obtained via penalization techniques. In this paper we propose a somewhat different approach, in the sense that the algorithm itself is … Read more

On the use of Jordan Algebras for improving global convergence of an Augmented Lagrangian method in nonlinear semidefinite programming

Jordan Algebras are an important tool for dealing with semidefinite programming and, more generally, optimization over symmetric cones. In this paper, Jordan Algebras are used judiciously in the context of sequential optimality conditions in order to generalize the global convergence theory of an Augmented Lagrangian method for nonlinear semidefinite programming. An approximate … Read more
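For context, in the semidefinite case the relevant Jordan product is X \circ Z = \tfrac{1}{2}(XZ + ZX) on symmetric matrices, and for X \succeq 0, Z \succeq 0 one has the standard equivalence

\langle X, Z \rangle = \operatorname{tr}(XZ) = 0 \iff X \circ Z = 0,

which is the algebraic structure that sequential optimality conditions can exploit beyond the inner product alone.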