On an Approximation of the Hessian of the Lagrangian

In the context of SQP methods or, more recently, of sequential semidefinite programming methods, it is common practice to construct a positive semidefinite approximation of the Hessian of the Lagrangian. The Hessian of the augmented Lagrangian is a suitable approximation as it maintains local superlinear convergence under appropriate assumptions. In this note we give a …
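To make the connection concrete in the equality-constrained case min f(x) s.t. c(x) = 0, the following is a standard identity (a reference sketch, not a formula quoted from the paper). With the augmented Lagrangian

\[
L_A(x,\lambda;\rho) \;=\; f(x) \;-\; \lambda^T c(x) \;+\; \tfrac{\rho}{2}\,\|c(x)\|^2,
\]

one has

\[
\nabla^2_{xx} L_A(x,\lambda;\rho) \;=\; \nabla^2_{xx} L\bigl(x,\;\lambda - \rho\, c(x)\bigr) \;+\; \rho\, J(x)^T J(x),
\]

where J denotes the Jacobian of c. At a feasible point the correction term \(\rho\, J^T J\) is positive semidefinite, and for \(\rho\) large enough the whole matrix is positive definite whenever a second-order sufficient condition holds, which is what makes this Hessian a natural candidate approximation.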

Some Properties of Regularization and Penalization Schemes for MPECs

Some properties of regularized and penalized nonlinear programming formulations of mathematical programs with equilibrium constraints (MPECs) are described. The focus is on the properties of these formulations near a local solution of the MPEC at which strong stationarity and a second-order sufficient condition are satisfied. In the regularized formulations, the complementarity condition is replaced by …
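For orientation, one widely used regularization of this kind (often attributed to Scholtes) relaxes the complementarity system with a parameter t > 0 that is driven to zero; whether it is exactly the variant analyzed here cannot be read off the excerpt:

\[
0 \le G(x) \perp H(x) \ge 0
\quad\longrightarrow\quad
G(x) \ge 0,\quad H(x) \ge 0,\quad G_i(x)\,H_i(x) \le t \quad (i = 1,\dots,m).
\]

Each relaxed problem is an ordinary nonlinear program to which standard optimality theory applies, unlike the original MPEC.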

An interior-point L1-penalty method for nonlinear optimization

A mixed interior/exterior-point method for nonlinear programming is described that handles constraints by means of an L1-penalty function. A suitable decomposition of the penalty terms and an embedding of the problem into a higher-dimensional setting lead to an equivalent, surprisingly regular, reformulation as a smooth penalty problem involving only inequality constraints. The resulting problem may then be tackled …
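The excerpt does not spell out the decomposition, but a standard elastic-variable device of this flavor smooths the penalty while introducing only inequality constraints. For constraints c(x) ≥ 0 and penalty parameter ν > 0,

\[
\min_x\; f(x) + \nu \sum_i \max\{0,\, -c_i(x)\}
\quad\Longleftrightarrow\quad
\min_{x,\,s}\; f(x) + \nu\, e^T s
\quad\text{s.t.}\quad c(x) + s \ge 0,\; s \ge 0,
\]

where e is the vector of ones: for fixed x the optimal slack is \(s_i = \max\{0, -c_i(x)\}\), so the two problems have the same value, and the right-hand problem is smooth and suitable for an interior-point method.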

Sums of Squares Relaxations of Polynomial Semidefinite Programs

A polynomial SDP (semidefinite program) minimizes a polynomial objective function over a feasible region described by a positive semidefinite constraint of a symmetric matrix whose components are multivariate polynomials. Sums of squares relaxations developed for polynomial optimization problems are extended to propose sums of squares relaxations for polynomial SDPs with an additional constraint for the …
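As a point of reference, the basic Lagrangian-style SOS certificate for such problems (in the spirit of Hol and Scherer, and of Kojima) works as follows; the additional constraint mentioned above is not visible in the excerpt. For min f(x) subject to F(x) ⪰ 0, a scalar γ is a valid lower bound on the optimal value whenever

\[
f(x) \;-\; \gamma \;-\; \bigl\langle S(x),\, F(x)\bigr\rangle \;=\; \sigma_0(x)
\]

for some SOS polynomial \(\sigma_0\) and some SOS matrix polynomial S (so S(x) ⪰ 0 for all x, hence \(\langle S(x), F(x)\rangle \ge 0\) on the feasible set). Maximizing γ over certificates of bounded degree is itself a semidefinite program.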

A limited memory algorithm for inequality constrained minimization

A method for solving inequality constrained minimization problems is described. The algorithm is based on a primal-dual interior point approach, with a line search globalization strategy. A quasi-Newton technique (BFGS) with limited memory storage is used to approximate the second derivatives of the functions. The method is especially intended for solving problems with a large …
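As a sketch of the limited-memory ingredient: the textbook L-BFGS two-loop recursion (Nocedal) applies the inverse Hessian approximation using only a fixed number of the most recent pairs s_k = x_{k+1} - x_k, y_k = ∇f_{k+1} - ∇f_k. This is the generic recursion, not the paper's exact primal-dual variant.

    import numpy as np

    def lbfgs_two_loop(grad, s_list, y_list):
        """Return H @ grad, where H is the L-BFGS inverse Hessian
        approximation built from the stored pairs (s_k, y_k), oldest first."""
        q = grad.astype(float).copy()
        rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
        alphas = []
        # First loop: newest pair to oldest.
        for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
            alpha = rho * s.dot(q)
            alphas.append(alpha)
            q -= alpha * y
        # Standard initial scaling gamma = s^T y / y^T y from the newest pair.
        if s_list:
            gamma = s_list[-1].dot(y_list[-1]) / y_list[-1].dot(y_list[-1])
        else:
            gamma = 1.0
        r = gamma * q
        # Second loop: oldest pair to newest.
        for s, y, rho, alpha in zip(s_list, y_list, rhos, reversed(alphas)):
            beta = rho * y.dot(r)
            r += (alpha - beta) * s
        return r

A quasi-Newton search direction is then d = -lbfgs_two_loop(g, S, Y), at a cost linear in the number of variables and stored pairs, which is what makes the approach attractive for large problems.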

Interior-Point Algorithms, Penalty Methods and Equilibrium Problems

In this paper we consider the question of solving equilibrium problems—formulated as complementarity problems and, more generally, mathematical programs with equilibrium constraints (MPECs)—as nonlinear programs, using an interior-point approach. These problems pose theoretical difficulties for nonlinear solvers, including interior-point methods. We examine the use of penalty methods to get around these difficulties, present an example …
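A representative penalty formulation from this literature moves the complementarity product into the objective; the excerpt does not pin down the exact variant used here. For an MPEC with complementarity between G and H:

\[
\min_x\; f(x) \;+\; \pi\, G(x)^T H(x)
\quad\text{s.t.}\quad g(x) \ge 0,\quad G(x) \ge 0,\quad H(x) \ge 0,
\]

with penalty parameter π > 0. The original complementarity constraint \(G(x)^T H(x) = 0\) leaves the feasible set without strict interior, whereas the penalized problem can be strictly feasible, so an interior-point method applies directly.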

An Algorithm for Degenerate Nonlinear Programming with Rapid Local Convergence

The paper describes and analyzes an algorithmic framework for solving nonlinear programming problems in which strict complementarity conditions and constraint qualifications are not necessarily satisfied at a solution. The framework is constructed from three main algorithmic ingredients. The first is any conventional method for nonlinear programming that produces estimates of the Lagrange multipliers at each …
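To recall the degeneracy in question: at a KKT point x* of min f(x) s.t. g(x) ≥ 0 with multiplier vector λ*, strict complementarity requires

\[
\lambda_i^* > 0 \quad\text{for every active index } i \ \bigl(g_i(x^*) = 0\bigr).
\]

The degenerate setting targeted here is the one where some active constraint carries a zero multiplier, or where a failing constraint qualification makes the multiplier non-unique; both situations typically destroy the fast local convergence of standard Newton-type methods.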

Convex- and Monotone-Transformable Mathematical Programming Problems and a Proximal-Like Point Method

The problem of finding singularities of monotone vector fields on Hadamard manifolds will be considered and solved by extending the well-known proximal point algorithm. For monotone vector fields the algorithm will generate a well-defined sequence, and for monotone vector fields with singularities it will converge to a singularity. It will also be shown how …
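For context, the classical proximal point iteration for a monotone operator T on Euclidean space picks x^{k+1} with \(0 \in T(x^{k+1}) + c_k\,(x^{k+1} - x^k)\). A natural manifold analogue, with the displacement replaced by the inverse exponential map, reads (in the single-valued case)

\[
X\bigl(x^{k+1}\bigr) \;-\; c_k\, \exp^{-1}_{x^{k+1}} x^k \;=\; 0,
\]

since \(\exp^{-1}_{x^{k+1}} x^k\) plays the role of \(x^k - x^{k+1}\). Sign and stepsize conventions in the paper itself may differ from this sketch.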

KNITRO-Direct: A Hybrid Interior Algorithm for Nonlinear Optimization

A hybrid interior-point method for nonlinear programming is presented. It enjoys the flexibility of switching between a line-search method, which computes steps by factoring the primal-dual equations, and an iterative method that uses a conjugate gradient algorithm and is globalized by means of trust regions. Steps computed by a direct factorization are always tried first, …
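The direct-first/iterative-fallback pattern can be pictured with a small toy; this sketch is purely illustrative (the matrix K stands in for the primal-dual system, and the real fallback step is safeguarded by a trust region rather than a plain CG solve).

    import numpy as np

    def hybrid_step(K, rhs, cond_limit=1e12):
        """Toy sketch: attempt a direct solve of K @ step = rhs and fall
        back to a textbook conjugate-gradient solve (K assumed SPD) when
        the factorization looks untrustworthy."""
        K = np.asarray(K, dtype=float)
        rhs = np.asarray(rhs, dtype=float)
        if np.linalg.cond(K) < cond_limit:
            return np.linalg.solve(K, rhs)   # direct factorization step
        step = np.zeros_like(rhs)            # conjugate-gradient fallback
        r = rhs.copy()
        p = r.copy()
        rs = r.dot(r)
        for _ in range(10 * rhs.size):
            Kp = K @ p
            alpha = rs / p.dot(Kp)
            step += alpha * p
            r -= alpha * Kp
            rs_new = r.dot(r)
            if np.sqrt(rs_new) < 1e-10:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return step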

Sharpening the Karush-John optimality conditions

A refined version of the Karush-John first-order optimality conditions is presented which reduces the number of constraints for which a constraint qualification is needed. This version is a generalization both of the Karush-John conditions and of the first-order optimality conditions for concave constraints.
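For reference, the classical Karush-John (Fritz John) conditions for min f(x) s.t. g(x) ≤ 0 assert the existence of multipliers \((\kappa_0, \kappa) \ne 0\) with

\[
\kappa_0\, \nabla f(x^*) \;+\; \sum_i \kappa_i\, \nabla g_i(x^*) \;=\; 0,
\qquad \kappa_0 \ge 0,\quad \kappa_i \ge 0,\quad \kappa_i\, g_i(x^*) = 0.
\]

A constraint qualification is what permits taking \(\kappa_0 = 1\), which recovers the KKT conditions; the refinement announced here secures \(\kappa_0 > 0\) while imposing a qualification on fewer constraints, which suggests in particular that concave constraints need none.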