We present the formulation and analysis of a new sequential quadratic programming (\SQP) method for general nonlinearly constrained optimization. The method pairs a primal-dual generalized augmented Lagrangian merit function with a \emph{flexible} line search to obtain a sequence of improving estimates of the solution. The merit function is a primal-dual variant of the augmented Lagrangian proposed independently by Hestenes and Powell in 1969. A crucial feature of the method is that the \QP{} subproblems are convex, yet they are formed from the exact second derivatives of the original problem, in contrast to methods that rely on less accurate quasi-Newton approximations. Additional benefits of this approach include the following: (i) each \QP{} subproblem is regularized; (ii) the \QP{} subproblem always has a known feasible point; and (iii) a projected gradient method may be used to identify the \QP{} active set when far from the solution.
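As a sketch of the merit function referenced above: for an equality-constrained problem $\min_x f(x)$ subject to $c(x) = 0$, one common primal-dual generalization of the Hestenes--Powell augmented Lagrangian takes the following form (the symbols and the scalar $\nu$ here are illustrative, not taken from the report itself):
\[
  M_\nu(x, y \,;\, y_e, \mu) \;=\; f(x) \;-\; c(x)^T y_e
    \;+\; \frac{1}{2\mu}\,\|c(x)\|_2^2
    \;+\; \frac{\nu}{2\mu}\,\bigl\|c(x) + \mu\,(y - y_e)\bigr\|_2^2,
\]
where $y_e$ is an estimate of the Lagrange multipliers, $\mu > 0$ is a penalty parameter, and $\nu$ weights the primal-dual term. Minimizing this function jointly in the pair $(x, y)$ promotes both primal feasibility and accurate multiplier estimates, which is what makes it suitable as a merit function for a primal-dual \SQP{} line search.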
Citation: UCSD Technical Report NA-11-02