Global convergence of a second-order augmented Lagrangian method under an error bound condition

This work deals with the convergence of a second-order safeguarded augmented Lagrangian method from the literature to points satisfying the weak second-order necessary optimality conditions. To this end, we propose a new second-order sequential optimality condition that is, in a certain sense, based on the iterates generated by the algorithm itself. This also allows us to … Read more
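
For reference, one common statement of the weak second-order necessary optimality condition in smooth nonlinear programming (under a suitable constraint qualification) requires the Hessian of the Lagrangian to be positive semidefinite on the subspace orthogonal to the gradients of the equality and active inequality constraints; this is the generic form, and the paper's sequential variant may differ in its details:
\[
d^\top \nabla^2_{xx} L(x^*,\lambda^*,\mu^*)\, d \;\ge\; 0 \quad \text{for all } d \in S(x^*),
\]
\[
S(x^*) \;=\; \{\, d \,:\, \nabla h_i(x^*)^\top d = 0 \ \text{for all } i,\ \ \nabla g_j(x^*)^\top d = 0 \ \text{for all } j \ \text{with } g_j(x^*) = 0 \,\}.
\]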

Global convergence of an augmented Lagrangian method for nonlinear programming via Riemannian optimization

Considering a standard nonlinear programming problem, one may view a subset of the equality constraints as an embedded Riemannian manifold. In this paper we investigate the differences between the Euclidean and the Riemannian approach to this problem. It is well known that the linear independence constraint qualifications for the two approaches are equivalent. However, when considering … Read more
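
As a generic sketch of the two viewpoints (with \(h_I\) denoting a chosen subset of the equality constraints whose gradients are linearly independent, and \(\tilde h\) the remaining ones), the chosen subset is absorbed into an embedded submanifold:
\[
\min_{x\in\mathbb{R}^n} f(x)\ \ \text{s.t.}\ \ h(x)=0,\ g(x)\le 0
\qquad\text{versus}\qquad
\min_{x\in M} f(x)\ \ \text{s.t.}\ \ \tilde h(x)=0,\ g(x)\le 0,
\]
where \(M=\{x\in\mathbb{R}^n : h_I(x)=0\}\) is treated as a Riemannian manifold with the metric induced from \(\mathbb{R}^n\).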

Strong global convergence properties of algorithms for nonlinear symmetric cone programming

Sequential optimality conditions have played a major role in proving strong global convergence properties of numerical algorithms for many classes of optimization problems. In particular, the way complementarity is dealt with is fundamental to achieving a strong condition. Typically, one uses the inner product structure to measure complementarity, which gives a very general approach to a … Read more
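
As a concrete instance of the inner-product measure of complementarity (a generic sketch for a self-dual symmetric cone \(\mathcal{K}\), not a formula taken from the abstract), complementarity between a slack \(s\) and a multiplier \(\lambda\) reads
\[
s \in \mathcal{K}, \qquad \lambda \in \mathcal{K}, \qquad \langle s, \lambda\rangle = 0,
\]
so an algorithm may monitor the scalar residual \(\langle s_k,\lambda_k\rangle\) together with the distances of \(s_k\) and \(\lambda_k\) to \(\mathcal{K}\).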

A relaxed quasinormality condition and the boundedness of dual augmented Lagrangian sequences

Global convergence of augmented Lagrangian methods to a first-order stationary point is well known to hold under rather weak constraint qualifications. In particular, several constant rank-type conditions have been introduced for this purpose, which have turned out to be relevant beyond this scope as well. In this paper we show that, in fact, under these conditions subsequences of … Read more
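
For context, safeguarded augmented Lagrangian methods of the kind discussed here are usually built on the Powell-Hestenes-Rockafellar function (a generic sketch, with penalty parameter \(\rho>0\) and safeguarded multiplier estimates \((\lambda,\mu)\)):
\[
L_\rho(x,\lambda,\mu) \;=\; f(x) \;+\; \frac{\rho}{2}\left( \Big\| h(x)+\frac{\lambda}{\rho}\Big\|^2 + \Big\| \max\!\Big(0,\ g(x)+\frac{\mu}{\rho}\Big)\Big\|^2 \right),
\]
with dual updates of the form \(\lambda^{k+1}=\lambda^k+\rho_k h(x^k)\) and \(\mu^{k+1}=\max\!\big(0,\ \mu^k+\rho_k g(x^k)\big)\), whose boundedness is the issue addressed in the paper.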

Constraint qualifications and strong global convergence properties of an augmented Lagrangian method on Riemannian manifolds

In recent years, augmented Lagrangian methods have been successfully applied to several classes of non-convex optimization problems, inspiring new developments in both theory and practice. In this paper we bring most of these recent developments from nonlinear programming to the context of optimization on Riemannian manifolds, including equality and inequality constraints. Much research has … Read more

A minimal face constant rank constraint qualification for reducible conic programming

In a previous paper [R. Andreani, G. Haeser, L. M. Mito, H. Ramírez, T. P. Silveira. First- and second-order optimality conditions for second-order cone and semidefinite programming under a constant rank condition. Mathematical Programming, 2023. DOI: 10.1007/s10107-023-01942-8] we introduced a constant rank constraint qualification for nonlinear semidefinite and second-order cone programming by considering all faces … Read more

On enhanced KKT optimality conditions for smooth nonlinear optimization

The Fritz John (FJ) and KKT conditions are fundamental tools for characterizing minimizers and form the basis of almost all methods for constrained optimization. Since the seminal works of Fritz John, Karush, Kuhn, and Tucker, the FJ/KKT conditions have been enhanced by adding extra necessary conditions. Such an extension was initially proposed by Hestenes in the 1970s … Read more
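
For smooth data \(f, h, g\), the KKT conditions referred to above read
\[
\nabla f(x^*) + \sum_i \lambda_i \nabla h_i(x^*) + \sum_j \mu_j \nabla g_j(x^*) = 0,\qquad
h(x^*)=0,\quad g(x^*)\le 0,\quad \mu\ge 0,\quad \mu_j\, g_j(x^*)=0 \ \ \text{for all } j,
\]
while the FJ conditions attach an additional multiplier \(\mu_0\ge 0\) to \(\nabla f(x^*)\) and require only that \((\mu_0,\lambda,\mu)\neq 0\); enhanced versions impose extra necessary restrictions on the multipliers.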

Improving the global convergence of Inexact Restoration methods for constrained optimization problems

Inexact restoration (IR) methods are an important family of numerical methods for solving constrained optimization problems, with applications to electronic structure calculations and bilevel programming, among other areas. In these methods, the minimization is divided into two phases: decreasing infeasibility (feasibility phase) and improving optimality (optimality phase). The feasibility phase does not require the generated points … Read more
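
A generic sketch of one IR iteration (the standard template, with fixed constants \(r\in[0,1)\) and \(\beta>0\); details vary between variants and this is not quoted from the paper): given \(x^k\), the feasibility phase computes a restored point \(y^k\) satisfying
\[
\|h(y^k)\| \;\le\; r\,\|h(x^k)\|, \qquad \|y^k-x^k\| \;\le\; \beta\,\|h(x^k)\|,
\]
where \(h\) measures constraint violation; the optimality phase then computes \(x^{k+1}\) by approximately reducing the objective, or a Lagrangian model, over an approximation of the feasible set around \(y^k\), with acceptance governed by a merit function or filter.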

Global Convergence of Algorithms Under Constant Rank Conditions for Nonlinear Second-Order Cone Programming

In [R. Andreani, G. Haeser, L. M. Mito, H. Ramírez C., Weak notions of nondegeneracy in nonlinear semidefinite programming, arXiv:2012.14810, 2020] the classical notion of nondegeneracy (or transversality) and Robinson's constraint qualification were revisited in the context of nonlinear semidefinite programming by exploiting the structure of the problem, namely its eigendecomposition. This allows formulating the … Read more

An extended delayed weighted gradient algorithm for solving strongly convex optimization problems

The recently developed delayed weighted gradient method (DWGM) is competitive with the well-known conjugate gradient (CG) method for the minimization of strictly convex quadratic functions. Like the CG method, DWGM has key optimality and orthogonality properties that justify its practical performance. The main difference from the CG method is that, instead of … Read more
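
Since DWGM is benchmarked against CG on strictly convex quadratics \(q(x)=\tfrac12 x^\top A x - b^\top x\) with \(A\) symmetric positive definite, a minimal Python sketch of the CG baseline (not of DWGM itself) may help fix ideas; the function name, tolerances, and test problem below are illustrative.

```python
import numpy as np

def cg_quadratic(A, b, x0, tol=1e-10, max_iter=1000):
    """Minimize q(x) = 0.5*x'Ax - b'x (i.e. solve Ax = b) for SPD A via conjugate gradients."""
    x = x0.astype(float)
    r = b - A @ x                      # residual = negative gradient of q at x
    p = r.copy()                       # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:      # gradient small enough: stop
            break
        p = r + (rs_new / rs_old) * p  # next direction, A-conjugate to the previous ones
        rs_old = rs_new
    return x

# Illustrative use on a random SPD system.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M.T @ M + 50 * np.eye(50)          # SPD by construction
b = rng.standard_normal(50)
x = cg_quadratic(A, b, np.zeros(50))
print(np.linalg.norm(A @ x - b))       # residual should be near the tolerance
```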