A globally convergent filter method for nonlinear programming

In this paper we present a filter algorithm for nonlinear programming and prove its global convergence to stationary points. Each iteration is composed of a restoration phase, which reduces a measure of infeasibility, and an optimality phase, which reduces the objective function in a tangential approximation of the feasible set. These two phases are totally …
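
For readers meeting filter methods for the first time, the sketch below illustrates the generic acceptance test behind such algorithms: a trial point, summarized by its infeasibility measure h and objective value f, is accepted only if no pair already stored in the filter dominates it. This is a simplified illustration with a hypothetical margin parameter gamma, not the exact rule analyzed in the paper.

```python
def filter_acceptable(trial_h, trial_f, filter_pairs, gamma=1e-5):
    """Generic filter test: accept the trial pair (h, f) unless some stored
    pair (h_j, f_j) dominates it, i.e. is no worse in both infeasibility and
    objective (up to a small margin gamma enforcing sufficient decrease)."""
    for h_j, f_j in filter_pairs:
        if trial_h >= (1.0 - gamma) * h_j and trial_f >= f_j - gamma * h_j:
            return False  # dominated by an existing filter entry: reject
    return True

# Example: the filter already holds (h, f) pairs from earlier iterations.
filter_pairs = [(0.5, 10.0), (0.1, 12.0)]
print(filter_acceptable(0.05, 11.0, filter_pairs))  # True: much less infeasible
print(filter_acceptable(0.6, 13.0, filter_pairs))   # False: dominated by (0.5, 10.0)
```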

Solving a Class of Semidefinite Programs via Nonlinear Programming

In this paper, we introduce a transformation that converts a class of linear and nonlinear semidefinite programming (SDP) problems into nonlinear optimization problems. For those problems of interest, the transformation replaces matrix-valued constraints by vector-valued ones, hence reducing the number of constraints by an order of magnitude. The class of transformable problems includes instances of …
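
As a rough illustration of how a matrix-valued constraint can be traded for vector-valued ones (the paper's specific change of variables may differ from this generic factorization device), the semidefiniteness constraint can be eliminated by writing X = V Vᵀ:

\[
\min_{X}\ \langle C, X\rangle \ \ \text{s.t.}\ \ \mathcal{A}(X)=b,\ X\succeq 0
\qquad\longrightarrow\qquad
\min_{V}\ \langle C, V V^{\mathsf T}\rangle \ \ \text{s.t.}\ \ \mathcal{A}(V V^{\mathsf T})=b,
\]

leaving only (generally nonconvex) constraints in the entries of V.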

Global and Local Convergence of Line Search Filter Methods for Nonlinear Programming

Line search methods for nonlinear programming using Fletcher and Leyffer’s filter method, which replaces the traditional merit function, are proposed and their global and local convergence properties are analyzed. Previous theoretical work on filter methods has considered trust region algorithms and only the question of global convergence. The presented framework is applied to barrier interior …
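
A minimal backtracking sketch of how a line search can be coupled to a filter test such as the one shown after the first abstract above; the step length is halved until the trial point is acceptable. The switching and sufficient-decrease conditions that the paper's analysis relies on are omitted, so this is only a schematic.

```python
def filter_line_search(x, d, f, h, is_acceptable, alpha_min=1e-8):
    """Backtrack along the search direction d until the trial point's
    (violation, objective) pair is acceptable to the filter.  f and h
    evaluate the objective and the constraint violation; is_acceptable
    is a filter test.  Schematic illustration only."""
    alpha = 1.0
    while alpha >= alpha_min:
        x_trial = [xi + alpha * di for xi, di in zip(x, d)]
        if is_acceptable(h(x_trial), f(x_trial)):
            return x_trial, alpha
        alpha *= 0.5
    return None, 0.0  # failure: a filter method would switch to restoration
```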

Constrained Nonlinear Programming for Volatility Estimation with GARCH Models

The paper proposes a constrained nonlinear programming methodology for volatility estimation with GARCH models. These models are usually formulated and solved as unconstrained optimization problems, whereas they in fact give rise to nonlinear, nonconvex problems. Computational results on FTSE 100 and S&P 500 indices with up to 1500 data points are given and contrasted to …
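
A minimal sketch of one way to pose GARCH(1,1) estimation as a constrained nonlinear program, here using SciPy's SLSQP solver on synthetic data; the paper's formulation, data, and solver differ, and the parameter names below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_nll(params, r):
    """Negative Gaussian log-likelihood of a GARCH(1,1) model.
    params = (omega, alpha, beta); r is the return series."""
    omega, alpha, beta = params
    sigma2 = np.empty_like(r)
    sigma2[0] = np.var(r)                      # initialize the variance recursion
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(sigma2) + r ** 2 / sigma2)

# Synthetic returns stand in for the FTSE 100 / S&P 500 series used in the paper.
rng = np.random.default_rng(0)
r = 0.01 * rng.standard_normal(1500)

# Positivity and stationarity written as explicit NLP constraints.
res = minimize(
    garch11_nll, x0=[1e-5, 0.05, 0.90], args=(r,),
    method="SLSQP",
    bounds=[(1e-8, None), (0.0, 1.0), (0.0, 1.0)],
    constraints=[{"type": "ineq", "fun": lambda p: 1.0 - p[1] - p[2]}],
)
print(res.x)  # estimated (omega, alpha, beta)
```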

Componentwise fast convergence in the solution of full-rank systems of nonlinear equations

The asymptotic convergence of parameterized variants of Newton’s method for the solution of nonlinear systems of equations is considered. The original system is perturbed by a term involving the variables and a scalar parameter which is driven to zero as the iteration proceeds. The exact local solutions to the perturbed systems then form a differentiable …
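
A toy illustration of the setting (not the paper's specific parameterization): Newton's method is applied to a perturbed system F(x) + μ p(x) = 0 for a sequence of parameters μ driven to zero, with each solve warm-started at the previous solution. The system F and perturbation p(x) = x below are hypothetical.

```python
import numpy as np

def newton(G, J, x, tol=1e-12, max_iter=50):
    """Plain Newton iteration for G(x) = 0 with Jacobian J."""
    for _ in range(max_iter):
        step = np.linalg.solve(J(x), -G(x))
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x

# Hypothetical full-rank system F(x) = 0 with solution (1, 2); the perturbation
# term involves the variables themselves, p(x) = x.
F  = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])
JF = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])

x = np.array([0.5, 0.5])
for mu in [1.0, 1e-2, 1e-4, 1e-8, 0.0]:        # drive the parameter to zero
    x = newton(lambda z: F(z) + mu * z, lambda z: JF(z) + mu * np.eye(2), x)
print(x)  # converges to the solution of the unperturbed system, (1, 2)
```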

Properties of the Log-Barrier Function on Degenerate Nonlinear Programs

We examine the sequence of local minimizers of the log-barrier function for a nonlinear program near a solution at which the second-order sufficient conditions and the Mangasarian-Fromovitz constraint qualification are satisfied, but the active constraint gradients are not necessarily linearly independent. When a strict complementarity condition is satisfied, we show uniqueness of the local minimizer of …
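
For reference, a standard form of the log-barrier function for a nonlinear program with inequality constraints c_i(x) ≥ 0, i = 1, …, m, and barrier parameter μ > 0 is

\[
P(x;\mu) \;=\; f(x) \;-\; \mu \sum_{i=1}^{m} \log c_i(x),
\]

and the analysis concerns the behaviour of local minimizers x(μ) of P(·;μ) as μ → 0.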

A Pattern Search Filter Method for Nonlinear Programming without Derivatives

This paper presents and analyzes a pattern search method for general constrained optimization based on filter methods for step acceptance. Roughly, a filter method accepts a step that either improves the objective function value or the value of some function that measures the constraint violation. The new algorithm does not compute or approximate any derivatives, …
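
A heavily simplified, derivative-free sketch of the two ingredients just described: a poll over coordinate directions, and a crude filter-style acceptance that takes a trial point if it improves either the objective or a constraint-violation measure. The poll sets and acceptance rules analyzed in the paper are richer.

```python
import numpy as np

def violation(x, constraints):
    """Aggregate constraint violation h(x) for constraints written as g(x) <= 0."""
    return sum(max(0.0, g(x)) for g in constraints)

def pattern_search_filter(f, constraints, x, step=1.0, tol=1e-6, max_iter=200):
    """Derivative-free poll along +/- coordinate directions; a poll point is
    accepted if it reduces either f or the violation h (a crude filter rule).
    Simplified illustration, not the algorithm analyzed in the paper."""
    n = len(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):
            trial = x + step * d
            if f(trial) < f(x) or violation(trial, constraints) < violation(x, constraints):
                x, improved = trial, True
                break
        if not improved:
            step *= 0.5          # unsuccessful poll: refine the mesh
    return x

# Hypothetical example: minimize a quadratic subject to x[0] + x[1] >= 1.
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
constraints = [lambda x: 1.0 - x[0] - x[1]]   # g(x) <= 0  <=>  x[0] + x[1] >= 1
print(pattern_search_filter(f, constraints, np.array([0.0, 0.0])))
```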

Assessing the Potential of Interior Methods for Nonlinear Optimization

A series of numerical experiments with interior point codes (LOQO, KNITRO) and active-set SQP codes (SNOPT, filterSQP) is reported and analyzed. The tests were performed on small, medium-size, and moderately large problems, and the results are examined by problem class. Detailed observations on the performance of the codes, together with several suggestions on how to improve them, are presented. …

Feasible Interior Methods Using Slacks for Nonlinear Optimization

A slack-based feasible interior point method is described which can be derived as a modification of infeasible methods. The modification is minor for most line search methods, but trust region methods require special attention. It is shown how the Cauchy point, which is often computed in trust region methods, must be modified so that the …
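
A sketch of the slack-based reformulation that such methods build on, shown here in barrier form for a problem with inequality constraints c(x) ≥ 0 (feasible variants additionally keep c(xᵏ) ≥ 0 at every iterate):

\[
\min_{x}\ f(x)\ \ \text{s.t.}\ \ c(x)\ge 0
\qquad\longrightarrow\qquad
\min_{x,\,s}\ f(x) - \mu \sum_i \log s_i\ \ \text{s.t.}\ \ c(x) - s = 0,\ \ s > 0.
\]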

A BFGS-IP algorithm for solving strongly convex optimization problems with feasibility enforced by an exact penalty approach

This paper introduces and analyses a new algorithm for minimizing a convex function subject to a finite number of convex inequality constraints. It is assumed that the Lagrangian of the problem is strongly convex. The algorithm combines interior point methods for dealing with the inequality constraints and quasi-Newton techniques for accelerating the convergence. Feasibility of …
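
The quasi-Newton ingredient in such algorithms is typically a BFGS update of an approximation B_k to the Hessian of the Lagrangian; one standard form of the update, written for the step s_k = x_{k+1} - x_k and a gradient-difference vector y_k, is

\[
B_{k+1} \;=\; B_k \;-\; \frac{B_k s_k s_k^{\mathsf T} B_k}{s_k^{\mathsf T} B_k s_k} \;+\; \frac{y_k y_k^{\mathsf T}}{y_k^{\mathsf T} s_k},
\]

which preserves positive definiteness whenever y_kᵀ s_k > 0, a curvature condition that the strong convexity assumption on the Lagrangian helps to guarantee.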