A sequential optimality condition related to the quasinormality constraint qualification and its algorithmic consequences

In the present paper, we prove that the augmented Lagrangian method converges to KKT points under the quasinormality constraint qualification, which is associated with the external penalty theory. For this purpose, a new sequential optimality condition for smooth constrained optimization, called PAKKT, is defined. The new condition takes into account the sign of the dual …
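For orientation, here is a schematic sketch (our paraphrase, not the paper's exact statement) of the approximate-KKT template that PAKKT strengthens: a feasible limit point x* satisfies AKKT when there exist sequences x^k → x* and multipliers λ^k, μ^k ≥ 0 with

\[ \nabla f(x^k) + \sum_i \lambda_i^k \nabla h_i(x^k) + \sum_j \mu_j^k \nabla g_j(x^k) \to 0, \qquad \min\{-g_j(x^k),\, \mu_j^k\} \to 0. \]

PAKKT additionally controls the dual sequence: roughly, any multiplier that remains significant relative to \( \|(1, \lambda^k, \mu^k)\|_\infty \) must agree in sign with the corresponding constraint value, which is what ties the condition to quasinormality.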

Properties of the block BFGS update and its application to the limited-memory block BNS method for unconstrained minimization

A block version of the BFGS variable metric update formula and its modifications are investigated. Although this formula satisfies the quasi-Newton conditions with respect to all of the difference vectors used, and for quadratic objective functions the resulting improvement of convergence is in a certain sense the best possible, for general functions it does not …
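For reference, one standard way to write a block BFGS update (a generic sketch; the paper's variants may differ in details): with the difference vectors collected in S = [s_1, …, s_m] and Y = [y_1, …, y_m],

\[ B_{+} \;=\; B \;-\; B S \,(S^{\top} B S)^{-1} S^{\top} B \;+\; Y \,(Y^{\top} S)^{-1} Y^{\top}, \]

which satisfies all the quasi-Newton conditions at once, B_{+} S = Y, whenever Y^{\top} S is symmetric positive definite (a direct computation gives B_{+} S = B S - B S + Y = Y).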

First Order Methods Beyond Convexity and Lipschitz Gradient Continuity with Applications to Quadratic Inverse Problems

We focus on nonconvex and nonsmooth minimization problems with a composite objective, where the differentiable part of the objective is freed from the usual and restrictive global Lipschitz gradient continuity assumption. This longstanding smoothness restriction is pervasive in first-order methods (FOM) and was recently circumvented for convex composite optimization by Bauschke, Bolte and Teboulle, …
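As a concrete illustration, here is a minimal sketch of a Bregman (mirror-descent-style) gradient step with the quartic kernel h(x) = (1/4)||x||^4 + (1/2)||x||^2, the kernel typically paired with quadratic inverse problems in this line of work. The function names and the fixed step size lam are illustrative assumptions, not the authors' implementation.

import numpy as np

def grad_h(x):
    # gradient of the kernel h(x) = 0.25*||x||^4 + 0.5*||x||^2
    return (np.dot(x, x) + 1.0) * x

def grad_h_inv(p):
    # invert grad_h: solve (||u||^2 + 1) u = p; writing u = theta * p
    # reduces this to the cubic ||p||^2 * theta^3 + theta - 1 = 0,
    # which has a unique real (and positive) root
    nrm2 = float(np.dot(p, p))
    if nrm2 == 0.0:
        return p
    roots = np.roots([nrm2, 0.0, 1.0, -1.0])
    theta = roots[np.isreal(roots)].real.max()
    return theta * p

def bregman_gradient_step(x, grad_f, lam):
    # one NoLips-style step: grad_h(x_next) = grad_h(x) - lam * grad_f(x),
    # i.e. x_next minimizes <grad_f(x), u> + (1/lam) * D_h(u, x)
    return grad_h_inv(grad_h(x) - lam * grad_f(x))

The point of the relative-smoothness condition (L h - f convex) is that a fixed step lam ≤ 1/L then yields descent, replacing the global Lipschitz gradient assumption.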

An extension of Yuan’s Lemma and its applications in optimization

We prove an extension of Yuan’s Lemma to more than two matrices, as long as the set of matrices has rank at most 2. This is used to generalize the main result of [A. Baccari and A. Trad. On the classical necessary second-order optimality conditions in the presence of equality and inequality constraints. SIAM J. …
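For reference, the classical two-matrix form of Yuan's Lemma being extended: for symmetric matrices A_1 and A_2,

\[ \max\{\, x^{\top} A_1 x,\; x^{\top} A_2 x \,\} \ge 0 \ \ \text{for all } x \quad \Longrightarrow \quad \exists\, \lambda_1, \lambda_2 \ge 0,\ \ \lambda_1 + \lambda_2 = 1,\ \ \lambda_1 A_1 + \lambda_2 A_2 \succeq 0. \]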

Some theoretical limitations of second-order algorithms for smooth constrained optimization

We investigate the relevance, in second-order algorithms, of the constant rank of the full set of active constraints for ensuring global convergence to a second-order stationary point. We show that second-order stationarity is not to be expected in the non-constant-rank case if the growth of the so-called tangent multipliers, associated with a second-order complementarity measure, is …
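For orientation, the second-order stationarity notion at stake is, schematically, the "weak" condition on the subspace generated by the full active set (our paraphrase of the standard setting, not necessarily the paper's exact variant):

\[ d^{\top} \nabla^2_{xx} L(x^*, \lambda^*)\, d \;\ge\; 0 \quad \text{for all } d \ \text{with} \ \nabla h_i(x^*)^{\top} d = 0 \ \forall i, \ \ \nabla g_j(x^*)^{\top} d = 0 \ \forall j \in A(x^*). \]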

Generalized Symmetric ADMM for Separable Convex Optimization

The Alternating Direction Method of Multipliers (ADMM) has proved effective for solving separable convex optimization problems subject to linear constraints. In this paper, we propose a Generalized Symmetric ADMM (GS-ADMM), which updates the Lagrange multiplier twice with suitable stepsizes, to solve multi-block separable convex programs. This GS-ADMM partitions the data into two …
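In the two-block case, the symmetric (twice-updated multiplier) template underlying GS-ADMM reads, schematically, with penalty β and stepsizes τ and s (the admissible stepsize region is part of the paper's contribution):

\[
\begin{aligned}
x^{k+1} &= \arg\min_x\; \mathcal{L}_\beta(x, y^k, \lambda^k), \\
\lambda^{k+1/2} &= \lambda^k - \tau \beta\, (A x^{k+1} + B y^k - b), \\
y^{k+1} &= \arg\min_y\; \mathcal{L}_\beta(x^{k+1}, y, \lambda^{k+1/2}), \\
\lambda^{k+1} &= \lambda^{k+1/2} - s \beta\, (A x^{k+1} + B y^{k+1} - b),
\end{aligned}
\]

where \( \mathcal{L}_\beta \) denotes the augmented Lagrangian.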

A second-order optimality condition with first- and second-order complementarity associated with global convergence of algorithms

We develop, by introducing so-called tangent multipliers, a new notion of second-order complementarity with respect to the tangent subspace associated with second-order necessary optimality conditions. We prove that, around a local minimizer, a second-order stationarity residual can be driven to zero while the growth of the Lagrange multipliers and the tangent multipliers is controlled, which gives …
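For context, the tangent subspace in question is the standard one attached to the active constraints:

\[ \mathcal{T}(x^*) = \{\, d : \nabla h_i(x^*)^{\top} d = 0 \ \forall i, \ \ \nabla g_j(x^*)^{\top} d = 0 \ \forall j \in A(x^*) \,\}, \]

on which second-order conditions require the Hessian of the Lagrangian to be positive semidefinite; schematically, the tangent multipliers play at second order a role analogous to that of the Lagrange multipliers at first order, together with a matching complementarity requirement mirroring μ_j g_j(x^*) = 0.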

A recursive semi-smooth Newton method for linear complementarity problems

A primal feasible active set method is presented for finding the unique solution of a Linear Complementarity Problem (LCP) with a P-matrix. The method extends the globally convergent active set method for strictly convex quadratic problems with simple bounds proposed in [P. Hungerlaender and F. Rendl. A feasible active set method for strictly convex problems with …
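A minimal sketch of a semi-smooth Newton iteration for the LCP, applied to the reformulation F(x) = min(x, Mx + q) = 0 and assuming M is a P-matrix; this is the generic textbook scheme, not the recursive method of the paper.

import numpy as np

def lcp_semismooth_newton(M, q, max_iter=50, tol=1e-10):
    # solve x >= 0, Mx + q >= 0, x . (Mx + q) = 0 via semi-smooth Newton
    # on F(x) = min(x, Mx + q)
    n = len(q)
    x = np.zeros(n)
    for _ in range(max_iter):
        w = M @ x + q
        if np.linalg.norm(np.minimum(x, w)) <= tol:
            return x
        active = w < x          # components where the min is attained by Mx + q
        x_new = np.zeros(n)
        if active.any():
            # Newton step: enforce (Mx + q)_A = 0 and x_I = 0
            x_new[active] = np.linalg.solve(M[np.ix_(active, active)], -q[active])
        x = x_new
    return x

# example: M = np.array([[2., 1.], [1., 2.]]); q = np.array([-1., 1.])
# lcp_semismooth_newton(M, q) returns [0.5, 0.], which satisfies all three conditions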

A New Trust Region Method with Simple Model for Large-Scale Optimization

In this paper, a new trust-region method with a simple model is proposed for solving large-scale unconstrained nonlinear optimization problems. Using the generalized weak quasi-Newton equations, we derive several schemes to determine an appropriate scalar matrix as the Hessian approximation. Under some reasonable conditions and within the trust-region framework, the global convergence …
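To make the "simple model" concrete: with a scalar matrix B_k = gamma_k * I, the trust-region subproblem has a closed-form solution. The sketch below uses a Barzilai-Borwein-type scalar as one illustrative choice of gamma_k; the paper derives its own schemes from generalized weak quasi-Newton equations.

import numpy as np

def scalar_model_tr_step(g, s_prev, y_prev, delta):
    # trust-region step for the simple model m(d) = g.d + 0.5*gamma*||d||^2
    # over ||d|| <= delta, with gamma from a BB-type quotient (illustrative)
    sy = float(np.dot(s_prev, y_prev))
    ss = float(np.dot(s_prev, s_prev))
    gamma = sy / ss if ss > 0.0 else 1.0
    if gamma > 0.0 and np.linalg.norm(g) / gamma <= delta:
        return -g / gamma                      # interior minimizer of the model
    return -delta * g / np.linalg.norm(g)      # otherwise step to the boundary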

Global convergence of a derivative-free inexact restoration filter algorithm for nonlinear programming

In this work we present an algorithm for solving constrained optimization problems that does not make explicit use of the derivatives of the objective function. The algorithm combines an inexact restoration framework with filter techniques, where the forbidden regions can be given by the flat or the slanting filter rule. Each iteration is decomposed into two independent phases: …
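A minimal sketch of the two filter acceptance rules mentioned, for a trial point with objective value f_new and infeasibility measure h_new; the constant gamma and the exact margins are illustrative assumptions.

def acceptable(f_new, h_new, filter_pairs, gamma=1e-4, slanting=True):
    # filter_pairs holds (f_i, h_i) of previously recorded iterates; a point is
    # acceptable if, against every entry, it improves optimality or feasibility
    ok = []
    for f_i, h_i in filter_pairs:
        if slanting:
            # slanting rule: the optimality margin scales with the new infeasibility
            ok.append(f_new <= f_i - gamma * h_new or h_new <= (1.0 - gamma) * h_i)
        else:
            # flat rule: constant margin taken from the recorded entry
            ok.append(f_new <= f_i - gamma * h_i or h_new <= (1.0 - gamma) * h_i)
    return all(ok)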