Convergence of Quasi-Newton Methods for Solving Constrained Generalized Equations

In this paper, we focus on quasi-Newton methods for solving constrained generalized equations. As is well known, this problem was first studied by Robinson and Josephy in the 1970s. Since then, it has been extensively studied by many other researchers, notably Dontchev and Rockafellar. Here, we propose two Broyden-type quasi-Newton approaches for dealing with constrained generalized …
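As background for readers unfamiliar with the Broyden update that these methods build on, here is a minimal sketch of the classical ("good") Broyden iteration for a smooth unconstrained equation $F(x) = 0$. This is standard material, not the authors' constrained method, and the test problem is illustrative only.

```python
import numpy as np

def broyden(F, x0, B0, tol=1e-10, max_iter=100):
    """Classical Broyden ("good") update for F(x) = 0 -- background only;
    the paper's methods for *constrained* generalized equations are more
    elaborate than this unconstrained sketch."""
    x = np.asarray(x0, dtype=float)
    B = np.asarray(B0, dtype=float)          # Jacobian approximation
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        s = np.linalg.solve(B, -Fx)          # quasi-Newton step: B s = -F(x)
        x = x + s
        y = F(x) - Fx                        # change in the residual
        # rank-one update enforcing the secant equation B_new s = y
        B = B + np.outer(y - B @ s, s) / (s @ s)
    return x

# Example: x0^2 + x1^2 = 2, x0 - x1 = 0; B0 is the Jacobian at x0,
# so the iteration should converge locally to the root (1, 1).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
print(broyden(F, x0=[1.5, 0.5], B0=[[3.0, 1.0], [1.0, -1.0]]))
```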

Weak notions of nondegeneracy in nonlinear semidefinite programming

The constraint nondegeneracy condition is one of the most relevant and useful constraint qualifications in nonlinear semidefinite programming. It can be characterized in terms of any fixed orthonormal basis of the (say) $\ell$-dimensional kernel of the constraint matrix, by the linear independence of a set of $\ell(\ell+1)/2$ derivative vectors. We show that this …
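A minimal numerical reading of this characterization might look as follows. It assumes a standard setup that the truncated abstract does not spell out: a smooth constraint map $G : \mathbb{R}^n \to \mathbb{S}^m$ whose partial derivatives $\partial G/\partial x_k$ are stored as $n$ symmetric matrices, and a matrix E whose columns form an orthonormal basis of the $\ell$-dimensional kernel.

```python
import numpy as np

def nondegeneracy_check(DG, E, tol=1e-10):
    """Sketch of the rank test suggested by the abstract.

    DG : array of shape (n, m, m); DG[k] = dG/dx_k at the point of
         interest (an assumed storage convention).
    E  : (m, l) matrix whose columns are an orthonormal basis of the
         l-dimensional kernel of the constraint matrix.
    Returns True iff the l(l+1)/2 derivative vectors
    v_{ij}[k] = e_i^T DG[k] e_j (i <= j) are linearly independent,
    which in particular requires l(l+1)/2 <= n.
    """
    n, l = DG.shape[0], E.shape[1]
    vecs = []
    for i in range(l):
        for j in range(i, l):
            v = np.array([E[:, i] @ DG[k] @ E[:, j] for k in range(n)])
            vecs.append(v)
    V = np.vstack(vecs)                      # shape (l(l+1)/2, n)
    return np.linalg.matrix_rank(V, tol=tol) == l * (l + 1) // 2
```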

On complexity and convergence of high-order coordinate descent algorithms

Coordinate descent methods with high-order regularized models for box-constrained minimization are introduced. High-order stationarity asymptotic convergence and first-order stationarity worst-case evaluation complexity bounds are established. The computational work required to obtain first-order $\varepsilon$-stationarity with respect to the variables of each coordinate-descent block is $O(\varepsilon^{-(p+1)/p})$, whereas the computational work for obtaining first-order $\varepsilon$-stationarity with …
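The truncated abstract does not specify the scheme, but the general shape of such an iteration can be sketched for the simplest case $p = 1$: on each block, a first-order model with quadratic regularization is minimized subject to the box, which reduces to a projected gradient step on that block. The fixed regularization parameter and the stopping test below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def block_cd(grad, x0, lb, ub, blocks, sigma=1.0, tol=1e-6, max_cycles=1000):
    """Coordinate descent with a regularized model, shown for p = 1:
    on block b, minimize g_b^T s + sigma * ||s||^2 subject to the box,
    i.e. take a projected gradient step on that block.  The paper
    treats general p-th order models; this is only the simplest case."""
    x = np.clip(np.asarray(x0, dtype=float), lb, ub)
    for _ in range(max_cycles):
        g = grad(x)
        # projected-gradient stationarity measure for box constraints
        if np.linalg.norm(np.clip(x - g, lb, ub) - x) <= tol:
            break
        for b in blocks:                      # b: array of indices
            gb = grad(x)[b]
            x[b] = np.clip(x[b] - gb / (2.0 * sigma), lb[b], ub[b])
    return x

# Example: min ||x - c||^2 over [0, 1]^4, split into two blocks
c = np.array([2.0, -1.0, 0.5, 0.25])
grad = lambda x: 2.0 * (x - c)
blocks = [np.array([0, 1]), np.array([2, 3])]
print(block_cd(grad, np.zeros(4), np.zeros(4), np.ones(4), blocks))
# -> [1.  0.  0.5  0.25]
```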

A family of optimal weighted conjugate-gradient-type methods for strictly convex quadratic minimization

We introduce a family of weighted conjugate-gradient-type methods for strictly convex quadratic functions, whose parameters are determined by a minimization model based on a convex combination of the objective function and its gradient norm. This family includes the classical linear conjugate gradient method and the recently published delayed weighted gradient method as its extreme cases …
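For reference, one of the two extreme cases named in the abstract, the classical linear conjugate gradient method for minimizing $q(x) = \frac{1}{2}x^T A x - b^T x$ with $A$ symmetric positive definite, is sketched below; the weighted family itself is not reproduced here.

```python
import numpy as np

def linear_cg(A, b, x0=None, tol=1e-10, max_iter=None):
    """Classical linear conjugate gradient for min 0.5 x^T A x - b^T x,
    with A symmetric positive definite."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    r = b - A @ x                  # residual = negative gradient
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol:
            break
        p = r + (rs_new / rs) * p  # A-conjugate direction update
        rs = rs_new
    return x

# Example: recovers the exact solution in at most n = 2 iterations
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(linear_cg(A, b))             # ~ [0.0909, 0.6364]
```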

Using first-order information in Direct Multisearch for multiobjective optimization

Derivatives are an important tool in single-objective optimization. In fact, it is commonly accepted that derivative-based methods outperform derivative-free optimization approaches. In this work, we show that the same does not hold in multiobjective derivative-based optimization when the goal is to compute an approximation to the complete Pareto front of …

Naive constant rank-type constraint qualifications for multifold second-order cone programming and semidefinite programming

The constant rank constraint qualification, introduced by Janin in 1984 for nonlinear programming, has been used extensively for sensitivity analysis, for the global convergence of first- and second-order algorithms, and for computing the derivative of the value function. In this paper we discuss naive extensions of constant rank-type constraint qualifications to second-order cone programming and semidefinite programming, …
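For reference, Janin's condition in nonlinear programming (recalled here as standard background, not taken from the truncated abstract) reads as follows. For constraints $g_i(x) \le 0$, $i \in I$, and $h_j(x) = 0$, $j \in J$, the constant rank constraint qualification holds at a feasible point $\bar x$ if there is a neighborhood $V$ of $\bar x$ such that, for every $I' \subseteq \{i \in I : g_i(\bar x) = 0\}$ and every $J' \subseteq J$,

$$\operatorname{rank}\bigl\{\nabla g_i(x),\ i \in I';\ \nabla h_j(x),\ j \in J'\bigr\} \ \text{is constant for all } x \in V.$$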

On scaled stopping criteria for a safeguarded augmented Lagrangian method with theoretical guarantees

This paper discusses the use of a stopping criterion based on scaling the Karush-Kuhn-Tucker (KKT) conditions by the norm of the approximate Lagrange multiplier in the ALGENCAN implementation of a safeguarded augmented Lagrangian method. Such a stopping criterion is already used in several nonlinear programming solvers, but it has not yet been considered in …
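The exact scaling used inside ALGENCAN is not given in the truncated abstract; one plausible reading of a multiplier-scaled KKT test, for min $f(x)$ s.t. $g(x) \le 0$, is sketched below. All names and the precise residuals are illustrative assumptions.

```python
import numpy as np

def scaled_kkt_satisfied(grad_f_x, jac_g_x, g_x, lam, eps=1e-8):
    """Hedged sketch of a multiplier-scaled KKT stopping test for
    min f(x) s.t. g(x) <= 0, with approximate multiplier lam >= 0.
    Stationarity and complementarity residuals are divided by
    max(1, ||lam||); the formula ALGENCAN actually uses may differ."""
    scale = max(1.0, np.linalg.norm(lam, np.inf))
    stationarity = np.linalg.norm(grad_f_x + jac_g_x.T @ lam, np.inf)
    feasibility = np.linalg.norm(np.maximum(g_x, 0.0), np.inf)
    complementarity = np.linalg.norm(lam * np.abs(g_x), np.inf)
    return (stationarity / scale <= eps       # scaled dual residual
            and feasibility <= eps            # feasibility is unscaled
            and complementarity / scale <= eps)
```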

On the best achievable quality of limit points of augmented Lagrangian schemes

The optimization literature abounds with papers on improving the global convergence of augmented Lagrangian schemes. Usually, the results are based on weak constraint qualifications or, more recently, on sequential optimality conditions obtained via penalization techniques. In this paper we propose a somewhat different approach, in the sense that the algorithm itself is …

On the use of Jordan Algebras for improving global convergence of an Augmented Lagrangian method in nonlinear semidefinite programming

Jordan algebras are an important tool for dealing with semidefinite programming and, more generally, optimization over symmetric cones. In this paper, Jordan algebras are used judiciously in the context of sequential optimality conditions in order to generalize the global convergence theory of an Augmented Lagrangian method for nonlinear semidefinite programming. An approximate …
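As standard background (not taken from the truncated abstract): the space $\mathbb{S}^n$ of symmetric $n \times n$ matrices is a Euclidean Jordan algebra under

$$X \circ Y = \tfrac{1}{2}(XY + YX), \qquad \langle X, Y \rangle = \operatorname{tr}(XY),$$

and its cone of squares $\{X \circ X : X \in \mathbb{S}^n\} = \{X^2 : X \in \mathbb{S}^n\}$ is exactly the positive semidefinite cone, which is what makes Jordan-algebraic tools natural for semidefinite programming and symmetric-cone optimization.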