Interior-Point Methods for Nonconvex Nonlinear Programming: Regularization and Warmstarts

In this paper, we investigate the use of an exact primal-dual penalty approach within the framework of an interior-point method for nonconvex nonlinear programming. This approach provides regularization and relaxation, which can aid in solving ill-behaved problems and in warmstarting the algorithm. We present details of our implementation within the LOQO algorithm and provide extensive …
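
The excerpt stops before the formulation, so the following is only a hedged sketch of the kind of penalty relaxation such an approach builds on, not the exact LOQO implementation. For an inequality-constrained problem $\min_x f(x)$ subject to $h(x) \ge 0$, a primal relaxation introduces elastic variables $\xi \ge 0$ and a penalty vector $d > 0$:
\[
\begin{aligned}
\min_{x,\,\xi}\quad & f(x) + d^{\top}\xi\\
\text{s.t.}\quad & h(x) + \xi \ge 0,\qquad \xi \ge 0,
\end{aligned}
\]
and a primal-dual variant additionally bounds or penalizes the dual variables (an assumption here, since the excerpt omits the details), which is one way such a scheme supplies both relaxation and regularization. For a sufficiently large but finite $d$, an exact penalty of this type recovers solutions of the original problem.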

On warm starts for interior methods

An appealing feature of interior methods for linear programming is that the number of iterations required to solve a problem tends to be relatively insensitive to the choice of initial point. This feature has the drawback that it is difficult to design interior methods that efficiently utilize information from an optimal solution to a “nearby” …
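
The excerpt ends before the paper's analysis; as a hedged illustration of why warmstarting is delicate, an optimal point $(x^{*}, s^{*})$ of the nearby problem typically has many components on the boundary, so complementarity products vanish and Newton steps taken from it are badly scaled. One common safeguard (not necessarily the scheme studied in this paper) pushes the starting point off the boundary,
\[
x_i^{0} = \max\{x_i^{*}, \delta\},\qquad s_i^{0} = \max\{s_i^{*}, \delta\},\qquad \delta > 0,
\]
and sets the initial barrier parameter from the resulting complementarity, e.g. $\mu_0 = (x^{0})^{\top} s^{0}/n$.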

Dynamic updates of the barrier parameter in primal-dual methods for nonlinear programming

We introduce a framework in which updating rules for the barrier parameter in primal-dual interior-point methods become dynamic. The original primal-dual system is augmented to incorporate explicitly an updating function. A Newton step for the augmented system gives a primal-dual Newton step and also a step in the barrier parameter. Based on local information and …
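
The updating function itself is truncated out of the excerpt; the following is a minimal sketch of how the augmentation can look, under the assumption that the new unknown is the barrier parameter $\mu$ and the extra equation ties it to the complementarity. For $\min_x f(x)$ subject to $c(x) = 0$, $x \ge 0$, with multipliers $\lambda$ and $z$, $X = \mathrm{diag}(x)$ and $Z = \mathrm{diag}(z)$, the augmented primal-dual system is
\[
F(x,\lambda,z,\mu) \;=\;
\begin{pmatrix}
\nabla f(x) - \nabla c(x)^{\top}\lambda - z\\
c(x)\\
XZe - \mu e\\
\mu - \theta(x,z)
\end{pmatrix} \;=\; 0,
\]
where $\theta$ is a hypothetical updating function used only for this sketch; one Newton step on $F$ produces $(\Delta x, \Delta\lambda, \Delta z, \Delta\mu)$ jointly, so the barrier parameter is adjusted by the same local information that drives the primal-dual step.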

A Note on Multiobjective Optimization and Complementarity Constraints

We propose a new approach to convex nonlinear multiobjective optimization that captures the geometry of the Pareto set by generating a discrete set of Pareto points optimally. We show that the problem of finding an optimal representation of the Pareto surface can be formulated as a mathematical program with complementarity constraints. The complementarity constraints arise …
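
The excerpt cuts off before explaining where the complementarity constraints come from; as a hedged orientation, the resulting model fits the generic MPCC template
\[
\begin{aligned}
\min_{x}\quad & f(x)\\
\text{s.t.}\quad & h(x) = 0,\qquad g(x) \ge 0,\\
& 0 \le G(x) \;\perp\; H(x) \ge 0,
\end{aligned}
\]
where $0 \le G(x) \perp H(x) \ge 0$ abbreviates $G(x) \ge 0$, $H(x) \ge 0$, $G(x)^{\top}H(x) = 0$; in formulations of this kind the complementarity typically encodes optimality conditions of the scalarized subproblems that generate the individual Pareto points (an assumption here, since the abstract is truncated).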

A New Low Rank Quasi-Newton Update Scheme for Nonlinear Programming

A new quasi-Newton scheme for updating a low rank positive semi-definite Hessian approximation is described, primarily for use in sequential quadratic programming methods for nonlinear programming. Where possible the symmetric rank one update formula is used, but when this is not possible a new rank two update is used, which is not in the Broyden …
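
The new rank-two formula is not shown in the excerpt, but the symmetric rank-one (SR1) update that the abstract says is used where possible is, with $s = x_{+} - x$ and $y = \nabla_x L(x_{+}) - \nabla_x L(x)$,
\[
B_{+} \;=\; B + \frac{(y - Bs)(y - Bs)^{\top}}{(y - Bs)^{\top} s},
\]
which is guaranteed to keep a positive semi-definite $B$ positive semi-definite, and raises its rank by at most one, when $(y - Bs)^{\top} s > 0$; when that safeguard fails, the paper's alternative rank-two update (not reproduced here) is applied instead.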

GRASP for nonlinear optimization

We propose a Greedy Randomized Adaptive Search Procedure (GRASP) for solving continuous global optimization problems subject to box constraints. The method was tested on benchmark functions and the computational results show that our approach was able to find, in a few seconds, optimal solutions for all tested functions despite not using any gradient information about …
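
The abstract does not include the algorithmic details, so the Python sketch below only illustrates the generic two-phase GRASP structure (greedy-randomized construction followed by local search) on a box-constrained problem; the function grasp, the grid spacing h, and the restricted-candidate-list parameter alpha are illustrative choices, not the authors' method.

```python
import random

def grasp(f, lower, upper, iters=100, h=0.1, alpha=0.3, seed=0):
    """Generic GRASP sketch for minimizing f over the box [lower, upper].

    Illustrative only (not the method of the paper): the construction
    phase builds a greedy-randomized point coordinate by coordinate on
    an h-spaced grid, and the local-search phase probes +/- h moves.
    """
    rng = random.Random(seed)
    n = len(lower)
    best_x, best_val = None, float("inf")

    def grid(i):
        # Grid points for coordinate i, spaced h apart inside the box.
        k = int((upper[i] - lower[i]) / h)
        return [lower[i] + j * h for j in range(k + 1)]

    for _ in range(iters):
        # Construction: pick each coordinate from a restricted candidate
        # list (RCL) of grid values whose objective lies within an alpha
        # fraction of the best candidate value.
        x = [rng.uniform(lower[i], upper[i]) for i in range(n)]
        for i in range(n):
            cands = grid(i)
            vals = [f(x[:i] + [c] + x[i + 1:]) for c in cands]
            cutoff = min(vals) + alpha * (max(vals) - min(vals))
            rcl = [c for c, v in zip(cands, vals) if v <= cutoff]
            x[i] = rng.choice(rcl)
        val = f(x)
        # Local search: coordinate-wise +/- h moves while they improve.
        improved = True
        while improved:
            improved = False
            for i in range(n):
                for step in (-h, h):
                    y = list(x)
                    y[i] = min(max(y[i] + step, lower[i]), upper[i])
                    if f(y) < val:
                        x, val, improved = y, f(y), True
        if val < best_val:
            best_x, best_val = list(x), val
    return best_x, best_val

# Example usage: a shifted sphere function over [-5, 5]^2.
if __name__ == "__main__":
    sphere = lambda x: sum((xi - 1.0) ** 2 for xi in x)
    print(grasp(sphere, [-5.0, -5.0], [5.0, 5.0], iters=20, h=0.5))
```

On this separable sphere example the coordinate-wise local search alone already drives each coordinate to the grid point nearest the minimizer; the randomized construction matters on multimodal benchmarks, where restarting from diverse greedy-randomized points reduces the chance of stalling in a poor basin.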

A General Robust-Optimization Formulation for Nonlinear Programming

Most research in robust optimization has so far been focused on inequality-only, convex conic programming with simple linear models for uncertain parameters. Many practical optimization problems, however, are nonlinear and non-convex. Even in linear programming, coefficients may still be nonlinear functions of uncertain parameters. In this paper, we propose robust formulations that extend the robust-optimization …
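
The formulation itself falls after the excerpt's cutoff; a standard semi-infinite robust counterpart gives the flavor (a sketch under the assumption of an uncertainty set $\mathcal{U}$, not necessarily the paper's model):
\[
\begin{aligned}
\min_{x}\quad & \max_{u \in \mathcal{U}} \; f(x, u)\\
\text{s.t.}\quad & g_i(x, u) \le 0 \quad \text{for all } u \in \mathcal{U},\; i = 1,\dots,m,
\end{aligned}
\]
where, in contrast to the convex conic setting the abstract describes, $f$ and $g_i$ may depend nonlinearly and nonconvexly on both the decision $x$ and the uncertain parameters $u$.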

Interior-Point $\ell_2$ Penalty Methods for Nonlinear Programming with Strong Global Convergence Properties

We propose two line search primal-dual interior-point methods that approximately solve a sequence of equality-constrained barrier subproblems. To solve each subproblem, our methods apply a modified Newton method and use an $\ell_2$-exact penalty function to attain feasibility. Our methods have strong global convergence properties under standard assumptions. Specifically, if the penalty parameter remains bounded, …
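
A hedged sketch of the kind of subproblem meant (the precise form solved by the two methods may differ): with slacks $s$ for the inequality constraints, each barrier subproblem is equality constrained, and the $\ell_2$-exact penalty handles its constraints, e.g.
\[
\min_{x,\; s>0}\; f(x) \;-\; \mu \sum_{i} \ln s_i \;+\; \nu\,\lVert c(x) - s \rVert_2,
\]
with barrier parameter $\mu > 0$ and penalty parameter $\nu > 0$; exactness of the $\ell_2$ penalty means a finite $\nu$ can already enforce feasibility, which is why boundedness of the penalty parameter is the case singled out in the convergence statement.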

Solving Multi-Leader-Follower Games

Multi-leader-follower games arise when modeling competition between two or more dominant firms and lead in a natural way to equilibrium problems with equilibrium constraints (EPECs). We examine a variety of nonlinear optimization and nonlinear complementarity formulations of EPECs. We distinguish two broad cases: problems where the leaders can cost-differentiate and problems with price-consistent followers. We …
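
The formulations themselves lie beyond the excerpt; schematically (a sketch, not the paper's notation), each leader $k$ solves an MPEC in its own variables $x_k$, treating the other leaders' decisions $x_{-k}$ as fixed and sharing a lower-level equilibrium written as a complementarity system:
\[
\min_{x_k,\,y}\; \theta_k(x_k, x_{-k}, y)
\quad\text{s.t.}\quad g_k(x_k) \le 0,\qquad 0 \le y \;\perp\; F(x_k, x_{-k}, y) \ge 0,
\]
and an EPEC equilibrium is a joint point at which every leader's MPEC is simultaneously solved.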

Elastic-Mode Algorithms for Mathematical Programs with Equilibrium Constraints: Global Convergence and Stationarity Properties

The elastic-mode formulation of the problem of minimizing a nonlinear function subject to equilibrium constraints has appealing local properties in that, for a finite value of the penalty parameter, local solutions satisfying first- and second-order necessary optimality conditions for the original problem are also first- and second-order points of the elastic-mode formulation. Here we study …
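
One common elastic-mode relaxation (a sketch; the exact variant analyzed in the paper is not shown in the excerpt) replaces the complementarity condition $0 \le G(x) \perp H(x) \ge 0$ by an elastic variable $t \ge 0$ penalized with parameter $c > 0$:
\[
\begin{aligned}
\min_{x,\,t}\quad & f(x) + c\,t\\
\text{s.t.}\quad & G(x) \ge 0,\qquad H(x) \ge 0,\qquad G(x)^{\top} H(x) \le t,\qquad t \ge 0,
\end{aligned}
\]
so the relaxed feasible set can satisfy standard constraint qualifications that the original complementarity constraints violate; the local property quoted in the abstract says that for a finite $c$ the original problem's first- and second-order points remain first- and second-order points of the elastic-mode formulation (here sketched in one common variant).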