Naive constant rank-type constraint qualifications for multifold second-order cone programming and semidefinite programming

The constant rank constraint qualification, introduced by Janin in 1984 for nonlinear programming, has been used extensively in sensitivity analysis, in global convergence proofs for first- and second-order algorithms, and for computing the derivative of the value function. In this paper we discuss naive extensions of constant rank-type constraint qualifications to second-order cone programming and semidefinite programming, … Read more
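
For reference, here is one common textbook statement of Janin's condition in the nonlinear programming setting (not taken from the paper itself): a feasible point $x^*$ of $\min f(x)$ s.t. $g_i(x) \le 0$, $i=1,\dots,m$, $h_j(x)=0$, $j=1,\dots,p$, satisfies the constant rank constraint qualification if there is a neighborhood $V$ of $x^*$ such that, for every $J \subseteq A(x^*) = \{ i : g_i(x^*) = 0 \}$ and every $K \subseteq \{1,\dots,p\}$,
\[
\operatorname{rank}\bigl\{ \nabla g_i(x),\ i \in J;\ \nabla h_j(x),\ j \in K \bigr\} \quad \text{is constant for all } x \in V.
\]
The question addressed above is how, and whether, rank conditions of this type carry over when the inequalities are replaced by second-order cone or semidefinite constraints.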

On complexity and convergence of high-order coordinate descent algorithms

Coordinate descent methods with high-order regularized models for box-constrained minimization are introduced. Asymptotic convergence to high-order stationary points and worst-case evaluation complexity bounds for first-order stationarity are established. The computational work needed to obtain first-order $\varepsilon$-stationarity with respect to the variables of each coordinate-descent block is $O(\varepsilon^{-(p+1)/p})$, whereas the computational work for obtaining first-order $\varepsilon$-stationarity with … Read more
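
To fix ideas, a generic $p$-th order regularized model for one coordinate block, in the spirit of (though not necessarily identical to) the models used in the paper: with current iterate $x^k$, a step $s$ supported on the chosen block, and a regularization parameter $\sigma_k > 0$, one approximately minimizes
\[
m_k(s) = \sum_{j=1}^{p} \frac{1}{j!}\, D^j f(x^k)[s]^j + \frac{\sigma_k}{p+1} \|s\|^{p+1}
\qquad \text{subject to } \ell \le x^k + s \le u,
\]
where $D^j f(x^k)[s]^j$ is the $j$-th order directional derivative along $s$. The $O(\varepsilon^{-(p+1)/p})$ bound quoted above is the worst-case evaluation complexity typically associated with $p$-th order regularization.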

On scaled stopping criteria for a safeguarded augmented Lagrangian method with theoretical guarantees

This paper discusses the use of a stopping criterion based on scaling the Karush-Kuhn-Tucker (KKT) conditions by the norm of the approximate Lagrange multiplier in the ALGENCAN implementation of a safeguarded augmented Lagrangian method. Such a stopping criterion is already used in several nonlinear programming solvers, but it has not yet been considered in … Read more
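
Purely as an illustration (the precise test implemented in ALGENCAN may differ), a scaled KKT stopping criterion for $\min f(x)$ s.t. $g(x) \le 0$ could take the form
\[
\frac{\bigl\| \nabla f(x^k) + \sum_{i} \lambda_i^k \nabla g_i(x^k) \bigr\|_\infty}{1 + \|\lambda^k\|_\infty} \le \varepsilon_{\mathrm{opt}},
\qquad
\bigl\| \min\{ -g(x^k),\, \lambda^k \} \bigr\|_\infty \le \varepsilon_{\mathrm{feas}},
\]
so that the stationarity residual is measured relative to the size of the approximate multiplier $\lambda^k$ rather than in absolute terms.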

On the best achievable quality of limit points of augmented Lagrangian schemes

The optimization literature abounds with papers dealing with improvements to the global convergence of augmented Lagrangian schemes. Usually, the results are based on weak constraint qualifications or, more recently, on sequential optimality conditions obtained via penalization techniques. In this paper we propose a somewhat different approach, in the sense that the algorithm itself is … Read more

On the use of Jordan Algebras for improving global convergence of an Augmented Lagrangian method in nonlinear semidefinite programming

Jordan Algebras are an important tool for dealing with semidefinite programming and, more generally, with optimization over symmetric cones. In this paper, Jordan Algebras are used judiciously in the context of sequential optimality conditions in order to generalize the global convergence theory of an Augmented Lagrangian method for nonlinear semidefinite programming. An approximate … Read more
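
For context, the Euclidean Jordan algebra most relevant to semidefinite programming is the space of real symmetric matrices equipped with the Jordan product
\[
X \circ Y = \tfrac{1}{2}\,(XY + YX),
\]
which is commutative but not associative. Eigenvalues, traces and the complementarity condition $X \circ Y = 0$ for positive semidefinite $X$ and $Y$ can all be expressed in this language, which is what makes the algebraic machinery convenient when formulating approximate optimality conditions.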

On the convergence of augmented Lagrangian strategies for nonlinear programming

Augmented Lagrangian algorithms are very popular and successful methods for solving constrained optimization problems. Recently, the global convergence analysis of these methods has been dramatically improved through the notion of sequential optimality conditions. Such conditions hold at minimizers independently of the fulfilment of any constraint qualification, and they provide theoretical tools to justify stopping … Read more
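
For the standard problem $\min f(x)$ s.t. $g(x) \le 0$, $h(x) = 0$, the best-known sequential optimality condition, AKKT, requires at a feasible point $x^*$ the existence of sequences $x^k \to x^*$, $\lambda^k \ge 0$ and $\mu^k$ such that
\[
\nabla f(x^k) + \sum_{i=1}^{m} \lambda_i^k \nabla g_i(x^k) + \sum_{j=1}^{p} \mu_j^k \nabla h_j(x^k) \to 0
\quad \text{and} \quad
\min\{ -g(x^k),\, \lambda^k \} \to 0.
\]
Every local minimizer satisfies AKKT regardless of constraint qualifications, which is precisely what makes such conditions suitable for justifying practical stopping rules.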

On optimality conditions for nonlinear conic programming

Sequential optimality conditions have played a major role in proving stronger global convergence results for numerical algorithms in nonlinear programming. Several extensions have been described in conic contexts, where many open questions have arisen. In this paper, we present new sequential optimality conditions in the context of a general nonlinear conic framework, which explains and … Read more

Properties of the delayed weighted gradient method

The delayed weighted gradient method, recently introduced in [13], is a low-cost gradient-type method that exhibits surprisingly, and perhaps unexpectedly, fast convergence behavior, competing favorably with the well-known conjugate gradient method for the minimization of convex quadratic functions. In this work, we establish several orthogonality properties that add understanding to the practical behavior … Read more

Optimality conditions for nonlinear second-order cone programming and symmetric cone programming

Nonlinear symmetric cone programming (NSCP) generalizes important optimization problems such as nonlinear programming, nonlinear semidefinite programming and nonlinear second-order cone programming (NSOCP). In this work, we present two new optimality conditions for NSCP without constraint qualifications, which imply the Karush-Kuhn-Tucker conditions under a condition weaker than Robinson’s constraint qualification. In addition, we show the relationship … Read more
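
As a reference point (with a sign convention chosen here for illustration), the KKT conditions for the conic problem $\min f(x)$ s.t. $g(x) \in \mathcal{K}$, with $\mathcal{K}$ a closed convex cone and $\mathcal{K}^*$ its dual cone, read
\[
\nabla f(x) - Dg(x)^{*}\mu = 0, \qquad \mu \in \mathcal{K}^{*}, \qquad g(x) \in \mathcal{K}, \qquad \langle \mu,\, g(x) \rangle = 0,
\]
and the claim above is that the new conditions imply these relations under an assumption weaker than Robinson’s constraint qualification.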

New sequential optimality conditions for mathematical problems with complementarity constraints and algorithmic consequences

In recent years, the theoretical convergence of iterative methods for solving nonlinear constrained optimization problems has been addressed using sequential optimality conditions, which are satisfied by minimizers independently of constraint qualifications (CQs). Even though there is a considerable literature devoted to sequential conditions for standard nonlinear optimization, the same is not true for Mathematical Problems … Read more
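
For concreteness, the mathematical problem with complementarity constraints (MPCC) referred to here is usually written as
\[
\min_{x} f(x) \quad \text{s.t.} \quad g(x) \le 0,\ \ h(x) = 0,\ \ G(x) \ge 0,\ \ H(x) \ge 0,\ \ G_i(x)\,H_i(x) = 0 \ \text{ for all } i,
\]
and it is well known that the Mangasarian-Fromovitz constraint qualification fails at every feasible point of such problems, which is what motivates sequential optimality conditions tailored to the MPCC structure.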