Interior Methods for Mathematical Programs with Complementarity Constraints

This paper studies theoretical and practical properties of interior-penalty methods for mathematical programs with complementarity constraints. A framework for implementing these methods is presented, and the need for adaptive penalty update strategies is motivated with examples. The algorithm is shown to be globally convergent to strongly stationary points, under standard assumptions. These results are then … Read more
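
For orientation, here is a generic sketch of the problem class and the kind of interior-penalty subproblem such methods solve (our notation, not necessarily the paper's): the MPCC

\[
\min_{x}\; f(x)
\quad\text{s.t.}\quad
c(x) \ge 0,
\qquad
0 \le G(x) \perp H(x) \ge 0,
\]

where complementarity requires \(G_i(x)\,H_i(x) = 0\) for every \(i\), is typically replaced by a sequence of barrier-penalty subproblems

\[
\min_{x}\; f(x) \;+\; \pi\, G(x)^{\top} H(x)
\;-\; \mu \sum_{i} \ln c_i(x)
\;-\; \mu \sum_{i} \ln G_i(x)
\;-\; \mu \sum_{i} \ln H_i(x),
\]

with barrier parameter \(\mu \downarrow 0\) and penalty parameter \(\pi > 0\); the adaptive update strategies referred to above govern how \(\pi\) is increased between (or within) these subproblems.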

Convergence Analysis of an Interior-Point Method for Mathematical Programs with Equilibrium Constraints

We prove local and global convergence results for an interior-point method applied to mathematical programs with equilibrium constraints. The global result shows that the algorithm minimizes infeasibility regardless of the starting point; one local result establishes convergence when the penalty functions are exact, while another establishes local convergence even when the solution is not a KKT point. … Read more
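
To make the infeasibility statement concrete (the precise measure used in the paper may differ), global results of this kind are usually phrased in terms of a constraint-violation measure such as

\[
\theta(x) \;=\; \|h(x)\|_{1} \;+\; \big\| \max\{0,\, -g(x)\} \big\|_{1}
\]

for constraints \(h(x) = 0\) and \(g(x) \ge 0\): limit points of the iterates are stationary points of \(\theta\), even if they are not feasible for the original problem.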

Computational experience with an interior point algorithm for large scale contact problems

In this paper we present an interior point method for large scale Signorini elastic contact problems. We study the case of an elastic body in frictionless contact with a rigid foundation. Primal and primal-dual algorithms are developed to solve the quadratic optimization problem arising in the variational formulation. Our computational study confirms the efficiency of … Read more
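
As a rough sketch of the problem class (standard discretized form; the authors' notation may differ), frictionless Signorini contact with a rigid foundation leads to a quadratic program in the nodal displacement vector \(u\):

\[
\min_{u}\; \tfrac{1}{2}\, u^{\top} K u \;-\; f^{\top} u
\quad\text{s.t.}\quad
N u \;\le\; g,
\]

where \(K\) is the symmetric positive (semi)definite stiffness matrix, \(f\) the load vector, and the linear constraints \(N u \le g\) express non-penetration of the candidate contact nodes, with \(g\) the initial gaps to the foundation. The primal and primal-dual interior-point algorithms are applied to problems of this form.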

Lagrange Multipliers with Optimal Sensitivity Properties

We consider optimization problems with inequality and abstract set constraints, and we derive sensitivity properties of Lagrange multipliers under very weak conditions. In particular, we do not assume uniqueness of a Lagrange multiplier or continuity of the perturbation function. We show that the Lagrange multiplier of minimum norm defines the optimal rate of improvement of … Read more
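
As a point of reference (a generic way such sensitivity statements are formalized, not the paper's exact theorem), let the perturbation function be

\[
p(u) \;=\; \inf\big\{\, f(x) \;:\; x \in X,\; g(x) \le u \,\big\},
\]

so that \(p(0)\) is the optimal value of the original problem. If \(\mu^{*}\) is a Lagrange multiplier of minimum norm, the claim is, roughly, that perturbing the constraints along \(u = \mu^{*}/\|\mu^{*}\|\) gives the steepest first-order improvement of the optimal cost per unit of constraint violation, with rate \(\|\mu^{*}\|\).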

Sums of Squares and Semidefinite Programming Relaxations for Polynomial Optimization Problems with Structured Sparsity

Unconstrained and inequality constrained sparse polynomial optimization problems (POPs) are considered. A correlative sparsity pattern graph is defined to find a certain sparse structure in the objective and constraint polynomials of a POP. Based on this graph, sets of supports for sums of squares (SOS) polynomials that lead to efficient SOS and semidefinite programming (SDP) … Read more
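
A tiny illustrative example (ours, not taken from the paper): for the unconstrained POP with objective

\[
f(x) \;=\; x_1^{4} x_2^{2} \;+\; x_2^{2} x_3^{2} \;+\; x_3^{4},
\]

variables \(x_1\) and \(x_3\) never appear in the same monomial, so the correlative sparsity pattern graph on \(\{1,2,3\}\) has only the edges \(\{1,2\}\) and \(\{2,3\}\), with maximal cliques \(\{x_1,x_2\}\) and \(\{x_2,x_3\}\). The sparse relaxation then seeks a certificate of the form

\[
f(x) - \rho \;=\; \sigma_{1}(x_1,x_2) \;+\; \sigma_{2}(x_2,x_3),
\qquad \sigma_{1}, \sigma_{2} \ \text{sums of squares},
\]

so each SOS block involves only two variables, giving a much smaller SDP than a dense certificate in all three variables.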

Best approximation to common fixed points of a semigroup of nonexpansive operators

We study a sequential algorithm for finding the projection of a given point onto the common fixed points set of a semigroup of nonexpansive operators in Hilbert space. The convergence of such an algorithm was previously established only for finitely many nonexpansive operators. Algorithms of this kind have been applied to the best approximation and … Read more
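
In symbols (problem statement only, with notation assumed here), the task is the best-approximation problem

\[
\min_{x \in \mathcal{H}} \; \|x - a\|
\quad\text{s.t.}\quad
x \;\in\; \bigcap_{t \in S} \operatorname{Fix}(T_t),
\]

i.e., compute the metric projection of a given point \(a \in \mathcal{H}\) onto the common fixed point set of the semigroup \(\{T_t\}_{t \in S}\) of nonexpansive operators on the Hilbert space \(\mathcal{H}\); the sequential algorithm studied here generates iterates converging to this projection.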

On the Convergence of Successive Linear-Quadratic Programming Algorithms

The global convergence properties of a class of penalty methods for nonlinear programming are analyzed. These methods include successive linear programming approaches, and more specifically, the successive linear-quadratic programming approach presented by Byrd, Gould, Nocedal and Waltz (Math. Programming 100(1):27–48, 2004). Every iteration requires the solution of two trust-region subproblems involving piecewise linear and quadratic … Read more
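
As a rough template for the two subproblems (the usual SLQP scheme; details may differ from the paper), iteration \(k\) works with the \(\ell_1\) penalty function

\[
\phi(x;\nu) \;=\; f(x) \;+\; \nu \sum_{i \in \mathcal{E}} |c_i(x)| \;+\; \nu \sum_{i \in \mathcal{I}} \max\{0,\, -c_i(x)\}.
\]

The first (piecewise-linear) trust-region subproblem minimizes the linearization of \(\phi\),

\[
\min_{d}\;\; \nabla f(x_k)^{\top} d
\;+\; \nu \sum_{i \in \mathcal{E}} \big| c_i(x_k) + \nabla c_i(x_k)^{\top} d \big|
\;+\; \nu \sum_{i \in \mathcal{I}} \max\big\{0,\, -c_i(x_k) - \nabla c_i(x_k)^{\top} d \big\}
\quad\text{s.t.}\quad \|d\|_{\infty} \le \Delta_k^{\mathrm{LP}},
\]

which can be posed as a linear program and identifies a working set of (near-)active constraints; the second subproblem then minimizes a quadratic model of the Lagrangian over a trust region, with the working-set constraints linearized and imposed as equalities.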

The Q Method for Second-order Cone Programming

Based on the Q method for SDP, we develop the Q method for SOCP. A modified Q method is also introduced. Properties of the algorithms are discussed. Convergence proofs are given. Finally, we present numerical results. Citation: AdvOl-Report #2004/15, McMaster University, Advanced Optimization Laboratory.

Newton-KKT Interior-Point Methods for Indefinite Quadratic Programming

Two interior-point algorithms are proposed and analyzed for the (local) solution of (possibly) indefinite quadratic programming problems. They are of the Newton-KKT variety in that (much like in the case of primal-dual algorithms for linear programming) search directions for the ‘primal’ variables and the Karush-Kuhn-Tucker (KKT) multiplier estimates are components of the Newton (or quasi-Newton) … Read more
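
Concretely, for the inequality-constrained QP \(\min_x \tfrac12 x^{\top} Q x + c^{\top} x\) subject to \(A x \ge b\) (a generic setting; the paper's problem class may be more general), the perturbed KKT conditions with slacks \(s = A x - b\) are

\[
Q x + c - A^{\top} \lambda = 0,
\qquad
\Lambda s = \mu e,
\qquad
s \ge 0,\; \lambda \ge 0,
\]

with \(\Lambda = \operatorname{diag}(\lambda)\), \(e\) the all-ones vector, and \(\mu \ge 0\) a perturbation parameter. A Newton-KKT step solves the linearized system

\[
\begin{pmatrix} Q & -A^{\top} \\ \Lambda A & S \end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta \lambda \end{pmatrix}
= -
\begin{pmatrix} Q x + c - A^{\top} \lambda \\ \Lambda s - \mu e \end{pmatrix},
\qquad S = \operatorname{diag}(s);
\]

with \(Q\) possibly indefinite, additional safeguards are generally needed to steer the iteration toward local minimizers rather than other stationary points.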

Performance of CONDOR, a Parallel, Constrained extension of Powell’s UOBYQA algorithm. Experimental results and comparison with the DFO algorithm.

This paper presents an algorithmic extension of Powell’s UOBYQA algorithm (“Unconstrained Optimization BY Quadratic Approximation”). We begin by summarizing Powell’s original algorithm and presenting it in a more comprehensible form. Thereafter, we report comparative numerical results between UOBYQA, DFO, and a parallel, constrained extension of UOBYQA that will be called in the … Read more
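
For readers unfamiliar with the underlying mechanism (a generic description of derivative-free trust-region methods of this family, not a specification of CONDOR itself): at iteration \(k\) the method maintains a quadratic model \(q_k\) that interpolates the objective \(f\) at a sample set \(\{y_j\}\),

\[
q_k(y_j) = f(y_j) \quad \text{for all } j,
\]

and the next trial step solves the trust-region subproblem

\[
\min_{s}\; q_k(x_k + s)
\quad\text{s.t.}\quad \|s\| \le \Delta_k ,
\]

after which \(\Delta_k\) and the sample set are updated according to how well \(q_k\) predicted the actual change in \(f\).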