A Computational Study of a Gradient-Based Log-Barrier Algorithm for a Class of Large-Scale SDPs

The authors of this paper recently introduced a transformation \cite{BuMoZh99-1} that converts a class of semidefinite programs (SDPs) into nonlinear optimization problems free of matrix-valued constraints and variables. This transformation enables the application of nonlinear optimization techniques to the solution of certain SDPs that are too large for conventional interior-point methods to handle efficiently. Based …
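
For orientation, here is the generic log-barrier subproblem for a standard-form SDP, which methods of this kind solve approximately for a decreasing sequence of barrier parameters; this is only a sketch of the classical formulation, not the matrix-free reformulation of \cite{BuMoZh99-1}, which the snippet above does not reproduce:

```latex
% Standard-form SDP:  min <C, X>  s.t.  <A_i, X> = b_i,  X positive semidefinite.
% Log-barrier subproblem for barrier parameter mu > 0:
\min_{X \succ 0} \; \langle C, X \rangle - \mu \log\det X
\quad \text{s.t.} \quad \langle A_i, X \rangle = b_i, \; i = 1, \dots, m,
```

with the subproblem re-solved as $\mu \downarrow 0$. The transformation referred to above eliminates the matrix variable $X$ and the semidefiniteness constraint, so that gradient-based nonlinear optimization can be applied directly.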

A Pattern Search Filter Method for Nonlinear Programming without Derivatives

This paper presents and analyzes a pattern search method for general constrained optimization based on filter methods for step acceptance. Roughly, a filter method accepts a step that either improves the objective function value or the value of some function that measures the constraint violation. The new algorithm does not compute or approximate any derivatives, …
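
A minimal sketch of the filter acceptance test described above, assuming the common convention of pairs $(h, f)$ where $h(x) \ge 0$ measures constraint violation and $f$ is the objective; the class name and the envelope margin `gamma` are illustrative, not the paper's notation:

```python
class Filter:
    """Toy filter: a list of (h, f) pairs.

    A trial point is acceptable if no stored pair dominates it, i.e. it
    improves either the violation h or the objective f against every
    filter entry, with a small envelope margin to prevent stagnation.
    """

    def __init__(self, gamma=1e-5):
        self.entries = []   # accepted (h, f) pairs
        self.gamma = gamma  # envelope margin (illustrative value)

    def acceptable(self, h, f):
        return all(h < (1.0 - self.gamma) * h_k or f < f_k - self.gamma * h_k
                   for h_k, f_k in self.entries)

    def add(self, h, f):
        # Discard entries dominated by the new pair, then store it.
        self.entries = [(h_k, f_k) for h_k, f_k in self.entries
                        if h_k < h or f_k < f]
        self.entries.append((h, f))
```

In a pattern search loop, a poll point that passes `acceptable` becomes the next iterate and is added to the filter; all other poll points are rejected, and no derivative information is ever needed.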

A Nonlinear Programming Algorithm for Solving Semidefinite Programs via Low-rank Factorization

In this paper, we present a nonlinear programming algorithm for solving semidefinite programs (SDPs) in standard form. The algorithm’s distinguishing feature is a change of variables that replaces the symmetric, positive semidefinite variable X of the SDP with a rectangular variable R according to the factorization X = RR^T. The rank of the factorization, i.e., …
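
A minimal sketch of the change of variables on a toy problem: once $X = RR^T$, positive semidefiniteness is automatic and only the equality constraints remain. The quadratic-penalty treatment and plain gradient steps below are an illustrative stand-in, not the paper's actual algorithm:

```python
import numpy as np

def sdp_low_rank(C, A_list, b, r, iters=2000, lr=1e-3, rho=10.0, seed=0):
    """Minimize <C, R R^T> subject to <A_i, R R^T> = b_i over R in R^{n x r}.

    The constraint X >= 0 disappears because X = R R^T is automatically
    positive semidefinite.  Equality constraints are handled here with a
    simple quadratic penalty (rho/2) * sum_i resid_i^2, for illustration.
    Assumes C and every A_i are symmetric n x n numpy arrays.
    """
    n = C.shape[0]
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((n, r)) / np.sqrt(n)
    for _ in range(iters):
        X = R @ R.T
        resid = np.array([np.tensordot(A, X) - bi for A, bi in zip(A_list, b)])
        # d<M, R R^T>/dR = 2 M R for symmetric M
        grad = 2.0 * C @ R
        for A, ri in zip(A_list, resid):
            grad += 2.0 * rho * ri * (A @ R)
        R -= lr * grad
    return R
```

For instance, `A_list = [np.eye(n)]` with `b = [1.0]` penalizes deviation from trace(X) = 1. The payoff of the factorization is storage and work proportional to $nr$ rather than $n^2$ when the chosen rank $r$ is small.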

Feasible Interior Methods Using Slacks for Nonlinear Optimization

A slack-based feasible interior-point method is described that can be derived as a modification of infeasible methods. The modification is minor for most line search methods, but trust region methods require special attention. It is shown how the Cauchy point, which is often computed in trust region methods, must be modified so that the …
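
For reference, a minimal sketch of the slack reformulation such methods start from, in generic notation for an inequality-constrained nonlinear program; this is the standard setup, not the paper's specific feasible modification:

```latex
% Original problem:   min f(x)  s.t.  c(x) >= 0.
% Slack-based barrier reformulation, for barrier parameter mu > 0:
\min_{x,\, s} \; f(x) - \mu \sum_{i=1}^{m} \log s_i
\quad \text{s.t.} \quad c(x) - s = 0, \qquad s > 0.
```

A feasible variant keeps the iterates strictly inside the inequality constraints, for example by resetting the slacks to $s \leftarrow c(x)$ after each step; it is in this setting that the trust region Cauchy point requires the modification the paper describes.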

On the global convergence of an SLP-filter algorithm

A mechanism for proving global convergence in filter-type methods for nonlinear programming is described. Such methods are characterized by their use of the dominance concept of multiobjective optimization, instead of a penalty parameter whose adjustment can be problematic. The main point of interest is to demonstrate how convergence for NLP can be induced without forcing …
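
For reference, a minimal sketch of the linearized trust-region subproblem that generates trial steps in an SLP method, in generic notation for the problem $\min f(x)$ s.t. $c(x) \ge 0$; the paper's exact safeguards are beyond this snippet:

```latex
% SLP trust-region subproblem at iterate x_k with trust radius rho_k:
\min_{d} \; \nabla f(x_k)^T d
\quad \text{s.t.} \quad c(x_k) + \nabla c(x_k)^T d \ge 0, \qquad \|d\|_\infty \le \rho_k.
```

The trial point $x_k + d$ is then tested against the filter's dominance criterion, where a stored pair $(h_j, f_j)$ of constraint violation and objective value dominates $(h, f)$ if $h_j \le h$ and $f_j \le f$, rather than against a penalty function with a tunable parameter.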