A second derivative SQP method: local convergence

In Gould and Robinson (Numerical Analysis Report 08/18, Oxford University Computing Laboratory, 2008) we gave global convergence results for a second-derivative SQP method for minimizing the exact $\ell_1$-merit function for a \emph{fixed} value of the penalty parameter. To establish this result, we used the properties of the so-called Cauchy step, which was itself computed from the so-called predictor step. In addition, we allowed for the computation of a variety of (optional) SQP steps intended to improve the efficiency of the algorithm. Although we established global convergence of the algorithm, we did not discuss certain aspects that are critical when developing software capable of solving general optimization problems. In particular, we need strategies for updating the penalty parameter and better techniques for defining the positive-definite matrix $B_k$ used in computing the predictor step. In this paper we address both of these issues. We consider two techniques for defining the positive-definite matrix $B_k$: a simple diagonal approximation and a more sophisticated \emph{limited-memory} BFGS update. We also analyze a strategy for updating the penalty parameter based on \emph{approximately} minimizing the $\ell_1$-penalty function over a sequence of increasing values of the penalty parameter. Algorithms based on exact penalty functions have certain desirable properties. To be practical, however, these algorithms must be guaranteed to avoid the so-called Maratos effect. We show that a \emph{nonmonotone} variant of our algorithm avoids this phenomenon and, therefore, achieves superlinear local convergence; this is verified by preliminary numerical results on the Hock and Schittkowski test set.
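The penalty-update strategy mentioned above can be illustrated with a minimal sketch, under stated assumptions: approximately minimize the exact $\ell_1$-penalty function $\phi(x;\nu) = f(x) + \nu\,\|c(x)\|_1$ for an increasing sequence of penalty parameters $\nu$, accepting the first approximate minimizer that is (nearly) feasible. The objective, constraint, and inner solver below are hypothetical stand-ins for illustration only, not the second-derivative SQP machinery of the paper.

```python
def f(x):                          # hypothetical smooth objective
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

def c(x):                          # hypothetical equality constraint c(x) = 0
    return x[0] + x[1] - 3.0

def phi(x, nu):                    # exact l1-penalty (merit) function
    return f(x) + nu * abs(c(x))

def compass_minimize(g, x, step=1.0, min_step=1e-10):
    """Crude 8-direction pattern search; tolerates the kink in phi."""
    dirs = [(1, 0), (-1, 0), (0, 1), (0, -1),
            (1, 1), (1, -1), (-1, 1), (-1, -1)]
    gx = g(x)
    while step > min_step:
        for dx, dy in dirs:
            y = (x[0] + step * dx, x[1] + step * dy)
            gy = g(y)
            if gy < gx:            # accept the first improving move
                x, gx = y, gy
                break
        else:
            step *= 0.5            # no direction improved: refine the mesh
    return x

def minimize_l1_penalty(x0, nu=1.0, feas_tol=1e-6, max_rounds=8):
    x = x0
    for _ in range(max_rounds):
        # approximately minimize phi(.; nu) from the current point
        x = compass_minimize(lambda z: phi(z, nu), x)
        if abs(c(x)) <= feas_tol:  # feasible enough: nu is large enough
            return x, nu
        nu *= 10.0                 # otherwise increase the penalty and retry
    return x, nu
```

Because the $\ell_1$-penalty is exact, once $\nu$ exceeds the relevant multiplier norm the penalty minimizer coincides with the constrained minimizer, so the loop typically terminates after very few increases of $\nu$.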

Citation

Numerical Analysis Report 08/21, Oxford University Computing Laboratory.
