Relations between divergence of multipliers and convergence to infeasible points in primal-dual interior methods for nonconvex nonlinear programming

Recently, infeasibility issues in interior methods for nonconvex nonlinear programming have been studied. In particular, it has been shown that many line-search interior methods may converge to an infeasible point that lies on the boundary of the feasible region with respect to the inequality constraints. The convergence is such that the search direction does not …

Redundant axioms in the definition of Bregman functions

The definition of a Bregman function, given by Censor and Lent in 1981 on the basis of Bregman’s seminal 1967 paper, was subsequently used in a plethora of research works as a tool for building sequential and inherently parallel feasibility and optimization algorithms. Solodov and Svaiter have recently shown that it is not …

Block-Iterative Algorithms with Underrelaxed Bregman Projections

The notion of relaxation is well understood for orthogonal projections onto convex sets. For general Bregman projections it was considered only for hyperplanes, and the question of how to relax Bregman projections onto convex sets that are not linear (i.e., not hyperplanes or half-spaces) has remained open. A definition of underrelaxation of Bregman projections onto …
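
For orientation, the classical relaxed step for an orthogonal projection onto a closed convex set C, which is the well-understood case the abstract refers to, can be written as below; the Bregman generalization proposed in the paper is not reproduced here.

```latex
% Relaxed orthogonal projection step onto a closed convex set C,
% with Euclidean projector P_C and relaxation parameter \lambda_k
% (\lambda_k < 1: underrelaxation, \lambda_k > 1: overrelaxation).
x^{k+1} = x^{k} + \lambda_k \bigl( P_C(x^{k}) - x^{k} \bigr),
\qquad 0 < \lambda_k < 2 .
```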

Recovery of the Analytic Center in Perturbed Quadratic Regions and Applications

We present results to recover an approximate analytic center when a sectional convex quadratic set is perturbed by a finite number of new quadratic inequalities. This kind of restarting may play an important role in some interior-point algorithms that successively refine the region in which the solution of the original problem lies.
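
As a point of reference, a standard definition of the analytic center of a region described by strict inequalities q_i(x) < 0 is sketched below; the paper's sectional convex quadratic sets and the recovery procedure itself are not reproduced, so this is only the generic form.

```latex
% Analytic center of S = { x : q_i(x) < 0, i = 1, ..., m }
% (generic definition for a region given by m smooth inequalities):
x_{\mathrm{ac}} \;=\; \arg\max_{x \in S} \; \sum_{i=1}^{m} \log\bigl( -q_i(x) \bigr).
```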

A Robust Primal-Dual Interior-Point Algorithm for Nonlinear Programs

We present a primal-dual interior-point algorithm of line-search type for nonlinear programs, which uses a new decomposition scheme of sequential quadratic programming. The algorithm can circumvent the convergence difficulties of some existing interior-point methods. Global convergence properties are derived without assuming regularity conditions. The penalty parameter rho in the merit function is updated automatically such …
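
For concreteness, one common form of a barrier merit function with a penalty parameter rho is sketched below for a generic problem min f(x) subject to c(x) = 0 and g(x) + s = 0 with slacks s > 0; the paper's exact merit function and the automatic update rule for rho may differ.

```latex
% A typical log-barrier merit function with penalty parameter \rho
% and barrier parameter \mu (assumed generic form, not necessarily
% the one used in the paper):
\phi_{\rho,\mu}(x,s) \;=\; f(x) \;-\; \mu \sum_i \log s_i
  \;+\; \rho \,\bigl\lVert \bigl( c(x),\, g(x) + s \bigr) \bigr\rVert_2 .
```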

NLPQLP: A New Fortran Implementation of a Sequential Quadratic Programming Algorithm

The Fortran subroutine NLPQLP solves smooth nonlinear programming problems and is an extension of the code NLPQL. The new version is specifically tuned to run under distributed systems. A new input parameter l is introduced for the number of parallel machines, that is, the number of function calls to be executed simultaneously. In case of …
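
The parallel feature described here amounts to evaluating the merit function at l trial step lengths along the search direction at the same time. The Python sketch below illustrates that idea only; NLPQLP itself is a Fortran code whose interface organizes the simultaneous function calls differently, and the names merit, x, d and the step grid are assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_line_search(merit, x, d, l):
    """Evaluate a merit function at l trial step lengths along the
    search direction d simultaneously and keep the best one.
    Illustration of the idea only, not NLPQLP's actual interface."""
    steps = [0.5 ** k for k in range(l)]           # predetermined grid in (0, 1]
    trial_points = [x + t * d for t in steps]
    with ThreadPoolExecutor(max_workers=l) as pool:
        values = list(pool.map(merit, trial_points))
    best = min(range(l), key=values.__getitem__)
    return steps[best], values[best]

# Toy usage: minimize a quadratic merit function along a descent direction.
if __name__ == "__main__":
    merit = lambda z: float(z @ z)                 # assumed toy merit function
    x = np.array([2.0, -1.0])
    d = -x
    print(parallel_line_search(merit, x, d, l=4))
```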

A truncated SQP algorithm for solving nonconvex equality constrained optimization problems

An algorithm for solving equality-constrained optimization problems is proposed. It can deal with nonconvex functions and uses a truncated conjugate gradient algorithm for detecting nonconvexity. The algorithm ensures convergence from remote starting points by using a line search. Numerical experiments are reported, comparing the approach with the one implemented in the trust-region codes ETR and Knitro. …
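
A minimal sketch of the detection mechanism: a truncated conjugate-gradient loop monitors the curvature p^T H p of each direction and stops as soon as it becomes non-positive, which signals nonconvexity of the quadratic model. The routine below is a generic illustration with assumed names, not the paper's implementation.

```python
import numpy as np

def truncated_cg(H, g, tol=1e-8, max_iter=200):
    """Approximately solve H x = -g by conjugate gradients, stopping
    early if a direction of non-positive curvature is found (i.e. the
    quadratic model is nonconvex along it).  Generic sketch only."""
    x = np.zeros_like(g)
    r = g.copy()                  # gradient of the quadratic model at x = 0
    p = -r
    for _ in range(max_iter):
        Hp = H @ p
        curv = p @ Hp
        if curv <= 0.0:
            # Nonconvexity detected: hand back the current iterate and
            # the negative-curvature direction to the outer algorithm.
            return x, p, False
        alpha = (r @ r) / curv
        x = x + alpha * p
        r_new = r + alpha * Hp
        if np.linalg.norm(r_new) <= tol:
            break
        beta = (r_new @ r_new) / (r @ r)
        p = -r_new + beta * p
        r = r_new
    return x, None, True
```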

A Comparative Study of Large-Scale Nonlinear Optimization Algorithms

In recent years, much work has been done on implementing a variety of algorithms in nonlinear programming software. In this paper, we analyze the performance of several state-of-the-art optimization codes on large-scale nonlinear optimization problems. Extensive numerical results are presented on different classes of problems, and features of each code that make it efficient or …

New Versions of Interior Point Methods Applied to the Optimal Power Flow Problem

Interior Point methods for Nonlinear Programming have been extensively used to solve the Optimal Power Flow problem. These optimization algorithms require the solution of a set of nonlinear equations to obtain the optimal solution of the power network equations. During the iterative process to solve these equations, the search for the optimum is based on …
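
The "set of nonlinear equations" referred to here is, in a generic primal-dual interior-point method, the perturbed KKT system to which Newton's method is applied. A standard form for min f(x) subject to c(x) = 0 and x >= 0 is shown below; it is not the specific optimal power flow formulation of the paper.

```latex
% Perturbed KKT conditions solved by Newton's method at each
% interior-point iteration (X = diag(x), Z = diag(z), e = (1,...,1)^T;
% the barrier parameter \mu is driven to zero):
\nabla f(x) + \nabla c(x)\,\lambda - z = 0, \qquad
c(x) = 0, \qquad
X Z e = \mu e, \qquad x > 0,\; z > 0 .
```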

A globally convergent filter method for nonlinear programming

In this paper we present a filter algorithm for nonlinear programming and prove its global convergence to stationary points. Each iteration is composed of a restoration phase, which reduces a measure of infeasibility, and an optimality phase, which reduces the objective function in a tangential approximation of the feasible set. These two phases are totally …
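
The filter mechanism referred to here can be summarized by the standard acceptance test of Fletcher and Leyffer: a trial pair of infeasibility measure h and objective value f is accepted only if it is not dominated by any pair already stored in the filter. The Python sketch below uses assumed margin parameters beta and gamma and is not the paper's exact envelope condition.

```python
def acceptable(h_trial, f_trial, filter_entries, beta=0.99, gamma=1e-4):
    """Filter acceptance test in the style of Fletcher and Leyffer:
    the trial pair (h, f) must sufficiently improve either the
    infeasibility measure h or the objective f with respect to every
    entry already in the filter.  Sketch with assumed margins."""
    return all(
        h_trial <= beta * h_j or f_trial <= f_j - gamma * h_trial
        for (h_j, f_j) in filter_entries
    )

# Example: the first trial point reduces infeasibility enough to be
# accepted; the second improves on neither measure for the entry (0.5, 6.0).
filt = [(1.0, 5.0), (0.5, 6.0)]
print(acceptable(0.3, 5.8, filt))   # True
print(acceptable(0.9, 6.5, filt))   # False
```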