Scalable Nonlinear Programming Via Exact Differentiable Penalty Functions and Trust-Region Newton Methods

We present an approach for nonlinear programming (NLP) based on the direct minimization of an exact differentiable penalty function using trust-region Newton techniques. Unlike existing algorithmic approaches to NLP, our approach provides all the features required for scalability: it can efficiently detect and exploit directions of negative curvature, it is superlinearly convergent, and it enables the scalable computation of the Newton step through iterative linear algebra. Moreover, it offers features that are desirable for parametric optimization problems that must be solved in a latency-limited environment, as is the case for model predictive control and mixed-integer nonlinear programming: fast detection of activity, efficient warm-starting, and progress on a primal-dual merit function at every iteration. We derive general convergence results for our approach and demonstrate its behavior through numerical studies.
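To make the central objects concrete, the following are illustrative sketches rather than the paper's actual formulation. A classical example of an exact differentiable primal-dual penalty function for the equality-constrained NLP, minimize f(x) subject to c(x) = 0, is the Di Pillo-Grippo augmented Lagrangian (the merit function used in the paper may differ in its precise form):

\[
M_{\sigma,\rho}(x,\lambda) \;=\; f(x) + \lambda^{\top} c(x) + \frac{\sigma}{2}\,\|c(x)\|^{2} + \frac{\rho}{2}\,\big\|\nabla c(x)^{\top}\nabla_{x} L(x,\lambda)\big\|^{2},
\qquad L(x,\lambda) = f(x) + \lambda^{\top} c(x).
\]

For suitable penalty parameters \(\sigma, \rho > 0\), unconstrained minimizers of \(M_{\sigma,\rho}\) in the joint variables \((x,\lambda)\) coincide with KKT points of the NLP, which is what permits attacking the constrained problem with unconstrained trust-region Newton machinery.

Likewise, "scalable computation of the Newton step through iterative linear algebra" combined with negative-curvature detection is commonly realized via a Steihaug-Toint truncated conjugate-gradient solve of the trust-region subproblem. The sketch below shows that standard technique, not the paper's implementation; all function and variable names are hypothetical, and only Hessian-vector products are assumed available:

```python
# Minimal sketch of the Steihaug-Toint truncated CG method for the
# trust-region subproblem: min g^T s + 0.5 s^T H s  s.t. ||s|| <= delta.
# Illustrative only; not the paper's algorithm.
import numpy as np

def steihaug_cg(hess_vec, grad, delta, tol=1e-8, max_iter=None):
    """hess_vec: callable v -> H @ v (H is never formed explicitly,
                 which is the source of scalability).
    grad:        gradient g of the model at the current iterate.
    delta:       trust-region radius."""
    n = grad.size
    max_iter = max_iter or 2 * n
    s = np.zeros(n)
    r = grad.copy()          # residual r = H s + g; starts at g since s = 0
    d = -r                   # first CG direction: steepest descent
    if np.linalg.norm(r) < tol:
        return s
    for _ in range(max_iter):
        Hd = hess_vec(d)
        dHd = d @ Hd
        if dHd <= 0.0:
            # Negative (or zero) curvature detected: exploit it by
            # following d all the way to the trust-region boundary.
            return s + _to_boundary(s, d, delta)
        alpha = (r @ r) / dHd
        s_next = s + alpha * d
        if np.linalg.norm(s_next) >= delta:
            # Step would leave the trust region: truncate at the boundary.
            return s + _to_boundary(s, d, delta)
        r_next = r + alpha * Hd
        if np.linalg.norm(r_next) < tol * np.linalg.norm(grad):
            return s_next
        beta = (r_next @ r_next) / (r @ r)
        d = -r_next + beta * d
        s, r = s_next, r_next
    return s

def _to_boundary(s, d, delta):
    # Positive tau solving ||s + tau d|| = delta (root of a quadratic).
    a, b, c = d @ d, 2 * (s @ d), s @ s - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return tau * d
```

Because the solver touches H only through products H v, the Hessian never needs to be formed or factorized, which is what lets the Newton-step computation scale to large problems.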
