A Simple Primal-Dual Feasible Interior-Point Method for Nonlinear Programming with Monotone Descent

We propose and analyze a primal-dual interior point method of the “feasible” type, with the additional property that the objective function decreases at each iteration. A distinctive feature of the method is the use of different barrier parameter values for each constraint, with the purpose of better steering the constructed sequence away from non-KKT stationary …
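As a rough illustration of the per-constraint barrier idea (our paraphrase in standard log-barrier notation, not necessarily the authors' exact formulation), consider an inequality-constrained problem \min_x f(x) subject to c_i(x) \ge 0, i = 1, \dots, m. A classical barrier subproblem uses a single parameter \mu, whereas attaching an individual parameter \mu_i to each constraint gives

\min_x \; f(x) - \sum_{i=1}^{m} \mu_i \log c_i(x), \qquad \mu_i > 0,

and the corresponding primal-dual centering conditions become c_i(x)\, z_i = \mu_i for each multiplier estimate z_i, rather than c_i(x)\, z_i = \mu for a common \mu. Driving each \mu_i to zero at its own rate is what provides the extra steering freedom mentioned in the abstract.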

On the superlinear local convergence of a filter-SQP method

Transition to superlinear local convergence is shown for a modified version of the trust-region filter-SQP method for nonlinear programming introduced by Fletcher, Leyffer, and Toint [8]. In this variant, the original trust-region SQP steps can be used without an additional second-order correction. The main modification consists in using the Lagrangian function value instead of the objective function …
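A schematic of the modification, stated in generic filter notation (our sketch, not the paper's precise acceptance conditions): with infeasibility measure h(x) = \|c(x)\| and Lagrangian L(x,\lambda) = f(x) + \lambda^T c(x), the filter entries and the acceptance test are built from the pair

(h(x_k),\, L(x_k,\lambda_k)) \quad \text{instead of} \quad (h(x_k),\, f(x_k)).

Near a solution satisfying second-order sufficient conditions, the Lagrangian typically decreases along full SQP steps even when f and h individually deteriorate slightly, which is the usual reason such a change lets full steps be accepted without a second-order correction.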

A globally convergent primal-dual interior-point filter method for nonlinear programming

In this paper, the filter technique of Fletcher and Leyffer (1997) is used to globalize the primal-dual interior-point algorithm for nonlinear programming, avoiding the use of merit functions and the updating of penalty parameters. The new algorithm decomposes the primal-dual step obtained from the perturbed first-order necessary conditions into a normal and a tangential step, …
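As a sketch of the step decomposition described above (generic notation, not the paper's exact definitions): writing A(x) for the Jacobian of the constraints c(x), the primal-dual step d obtained from the perturbed KKT system is split as d = n + t, where the normal component works on feasibility and the tangential component works on optimality,

A(x_k)\, n \approx -c(x_k), \qquad A(x_k)\, t \approx 0,

and the filter then judges the resulting trial point through an infeasibility measure and the (barrier) objective rather than through a single merit function.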

Space mapping: Models, sensitivities, and trust-region methods

The goal of this paper is to organize some of the mathematical and algorithmic aspects of the recently proposed space-mapping technique for continuous optimization with expensive function evaluations. First, we consider the mapping from the fine space to the coarse space when the models are vector-valued functions and when the space-mapping (nonlinear) least-squares residual is …
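A minimal sketch of the space-mapping residual referred to above (our notation: f denotes the fine, expensive model and c the coarse, cheap model, both vector-valued): the space mapping p sends a fine-space point x to the coarse-space point whose coarse response best matches the fine response,

p(x) \in \arg\min_{z} \; \|c(z) - f(x)\|_2^2,

and the nonlinear least-squares residual \|c(p(x)) - f(x)\| measures how well the coarse model can be aligned with the fine one at x.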

A truncated SQP algorithm for solving nonconvex equality constrained optimization problems

An algorithm for solving equality constrained optimization problems is proposed. It can deal with nonconvex functions and uses a truncated conjugate gradient algorithm for detecting nonconvexity. The algorithm ensures convergence from remote starting points by using a line search. Numerical experiments are reported, comparing the approach with the one implemented in the trust-region codes ETR and Knitro. …
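A compact sketch of how a truncated conjugate-gradient iteration can double as a nonconvexity detector (a generic Steihaug-style loop in Python, not the authors' implementation; the handling of the detected direction is just one possible choice):

import numpy as np

def truncated_cg(H, g, tol=1e-8, max_iter=100):
    """Approximately solve H d = -g by conjugate gradients.

    Returns (d, negative_curvature): the flag is True if a conjugate
    direction p with p^T H p <= 0 was encountered, i.e. the (approximate)
    Hessian H is not positive definite along the directions explored.
    """
    d = np.zeros_like(g, dtype=float)
    g_norm = np.linalg.norm(g)
    if g_norm == 0.0:
        return d, False
    r = g.astype(float).copy()   # residual of H d + g = 0 at the current d
    p = -r                       # first conjugate direction
    for _ in range(max_iter):
        Hp = H @ p
        curv = p @ Hp
        if curv <= 0.0:
            # Nonconvexity detected: stop and fold the direction of
            # non-positive curvature into the returned step.
            return d + p, True
        alpha = (r @ r) / curv
        d = d + alpha * p
        r_new = r + alpha * Hp
        if np.linalg.norm(r_new) <= tol * g_norm:
            return d, False
        beta = (r_new @ r_new) / (r @ r)
        p = -r_new + beta * p
        r = r_new
    return d, False

The flag can then trigger whatever negative-curvature handling the outer SQP iteration prescribes, for example switching to a direction of negative curvature or shortening the step in the line search.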

A globally convergent filter method for nonlinear programming

In this paper we present a filter algorithm for nonlinear programming and prove its global convergence to stationary points. Each iteration is composed of a restoration phase, which reduces a measure of infeasibility, and an optimality phase, which reduces the objective function in a tangential approximation of the feasible set. These two phases are totally …
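For reference, a standard acceptance test of the kind filter methods use (generic form with constants \beta \in (0,1) and small \gamma > 0, not necessarily the exact conditions of this paper): a trial point x^+ with infeasibility h(x^+) and objective f(x^+) is acceptable if, for every pair (h_j, f_j) stored in the filter,

h(x^+) \le \beta\, h_j \quad \text{or} \quad f(x^+) \le f_j - \gamma\, h(x^+),

and the restoration phase is invoked whenever the optimality phase cannot produce an acceptable step.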

Global and Local Convergence of Line Search Filter Methods for Nonlinear Programming

Line search methods for nonlinear programming using Fletcher and Leyffer’s filter method, which replaces the traditional merit function, are proposed and their global and local convergence properties are analyzed. Previous theoretical work on filter methods has considered trust region algorithms and only the question of global convergence. The presented framework is applied to barrier interior …
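A minimal sketch of the backtracking loop such line-search filter frameworks revolve around (generic Python pseudocode under a simplified acceptance rule; the functions f and h, the iterate x, and the search direction d are assumed given and are not any particular paper's interface):

def filter_acceptable(h_trial, f_trial, filter_pairs, beta=0.99, gamma=1e-5):
    # The trial pair must improve, up to margins, on every stored (h, f) pair.
    return all(h_trial <= beta * h_j or f_trial <= f_j - gamma * h_trial
               for h_j, f_j in filter_pairs)

def line_search_filter_step(x, d, f, h, filter_pairs, alpha_min=1e-10):
    # Backtrack along d until the trial point is acceptable to the filter;
    # return None to signal that a restoration phase is needed.
    alpha = 1.0
    while alpha >= alpha_min:
        x_trial = [xi + alpha * di for xi, di in zip(x, d)]
        if filter_acceptable(h(x_trial), f(x_trial), filter_pairs):
            return x_trial
        alpha *= 0.5
    return None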

On the Convergence of Newton Iterations to Non-Stationary Points

We study conditions under which line search Newton methods for nonlinear systems of equations and optimization fail due to the presence of singular non-stationary points. These points are not solutions of the problem and are characterized by the fact that Jacobian or Hessian matrices are singular. It is shown that, for systems of nonlinear equations, …
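A brief illustration of the mechanism behind such failures (our summary of the standard intuition, not the paper's precise analysis): for F(x) = 0, a line-search Newton method typically monitors the merit function \varphi(x) = \tfrac{1}{2}\|F(x)\|_2^2 and steps along d_k = -J(x_k)^{-1} F(x_k). As J(x_k) approaches a singular matrix with F(x_k) \neq 0, the Newton direction can grow without bound and become nearly orthogonal to \nabla\varphi(x_k) = J(x_k)^T F(x_k), so a backtracking line search on \varphi accepts only tiny fractions of the step and the iterates may stagnate near a point that is neither a solution nor a stationary point of \varphi.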

On the global convergence of an SLP-filter algorithm

A mechanism for proving global convergence in filter-type methods for nonlinear programming is described. Such methods are characterized by their use of the dominance concept of multiobjective optimization, instead of a penalty parameter whose adjustment can be problematic. The main point of interest is to demonstrate how convergence for NLP can be induced without forcing …
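The dominance concept mentioned above can be stated in one line (standard filter terminology): with infeasibility measure h and objective f, a pair (h_k, f_k) dominates (h_l, f_l) if h_k \le h_l and f_k \le f_l; the filter is the collection of pairs generated so far, none of which dominates another, and a trial point is rejected whenever its pair is dominated by (or insufficiently better than) an entry of the filter.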

Failure of Global Convergence for a Class of Interior Point Methods for Nonlinear Programming

Using a simple analytical example, we demonstrate that a class of interior point methods for general nonlinear programming, including some current methods, is not globally convergent. It is shown that those algorithms do produce limit points that are neither feasible nor stationary points of some measure of the constraint violation, when applied to a well-posed …