A Filter Active-Set Trust-Region Method

We develop a new active-set method for nonlinear programming problems that solves a regularized linear program to predict the active set and then fixes the active constraints to solve an equality-constrained quadratic program for fast convergence. Global convergence is promoted through the use of a filter. We show that the regularization parameter fulfills the same …
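
To make the two-stage structure concrete, here is a generic sketch of the two subproblems in the SLP–EQP mold; the symbols $f$, $c$, $x_k$, $d$, $\Delta_k$ and the form of the regularization are illustrative assumptions, not the paper's precise formulation. For $\min_x f(x)$ subject to $c(x) \ge 0$, an active-set prediction step solves a (regularized) linear program inside a trust region,
\[
\min_{d}\ \nabla f(x_k)^{T} d
\quad\text{s.t.}\quad c(x_k) + \nabla c(x_k)^{T} d \ge 0,\qquad \|d\|_\infty \le \Delta_k,
\]
the constraints active at its solution define a working set $\mathcal{A}_k$, and a second step solves the equality-constrained quadratic program
\[
\min_{d}\ \nabla f(x_k)^{T} d + \tfrac12\, d^{T}\,\nabla^{2}_{xx}\mathcal{L}(x_k, y_k)\, d
\quad\text{s.t.}\quad c_i(x_k) + \nabla c_i(x_k)^{T} d = 0,\quad i \in \mathcal{A}_k,
\]
whose solution supplies the fast Newton-like step that the filter then accepts or rejects.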

In pursuit of a root

The basis pursuit technique is used to find a minimum one-norm solution of an underdetermined least-squares problem. Basis pursuit denoise fits the least-squares problem only approximately, and a single parameter determines a curve that traces the trade-off between the least-squares fit and the one-norm of the solution. We show that the function that describes this …
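
For reference, the two formulations behind this abstract can be written in the standard notation ($A$, $b$, $\sigma$ are assumed here):
\[
\text{(BP)}\quad \min_{x}\ \|x\|_1 \ \ \text{s.t.}\ \ Ax = b,
\qquad\qquad
\text{(BP}_\sigma)\quad \min_{x}\ \|x\|_1 \ \ \text{s.t.}\ \ \|Ax - b\|_2 \le \sigma,
\]
where $A$ is the underdetermined matrix and $b$ the observations. The single parameter in the abstract corresponds to the misfit level $\sigma$, and the trade-off curve is traced by the optimal value of $\text{BP}_\sigma$ as $\sigma$ varies.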

Exact regularization of convex programs

The regularization of a convex program is exact if all solutions of the regularized problem are also solutions of the original problem for all values of the regularization parameter below some positive threshold. For a general convex program, we show that the regularization is exact if and only if a certain selection problem has a …
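
A minimal sketch of the setup, with notation ($f$, $\varphi$, $C$, $\mu$, $S$) assumed here rather than taken from the paper: given a convex program $\min_x\,\{\, f(x) : x \in C \,\}$ with solution set $S$ and a convex regularization function $\varphi$, the regularized problem is
\[
\min_{x}\ f(x) + \mu\,\varphi(x) \quad\text{s.t.}\quad x \in C, \qquad \mu > 0,
\]
and exactness means there is a threshold $\bar\mu > 0$ such that every solution of the regularized problem also solves the original problem whenever $0 < \mu \le \bar\mu$. One natural form of the selection problem the abstract refers to picks out the original solutions preferred by the regularizer,
\[
\min_{x}\ \varphi(x) \quad\text{s.t.}\quad x \in S .
\]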

Computing nonnegative tensor factorizations

Nonnegative tensor factorization (NTF) is a technique for computing a parts-based representation of high-dimensional data. NTF excels at exposing latent structures in datasets, and at finding good low-rank approximations to the data. We describe an approach for computing the NTF of a dataset that relies only on iterative linear-algebra techniques and that is comparable in …
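
The paper's own algorithm is not reproduced here, but the flavor of "nonnegative factors fitted with iterative linear algebra" can be illustrated by a generic multiplicative-update scheme for a rank-r nonnegative CP factorization of a 3-way array. The function names and update rules below are a standard textbook construction, not the method of the paper.

```python
import numpy as np

def khatri_rao(C, B):
    """Column-wise Kronecker product: column l is kron(C[:, l], B[:, l])."""
    K, r = C.shape
    J, _ = B.shape
    return (C[:, None, :] * B[None, :, :]).reshape(K * J, r)

def ntf_cp(X, rank, iters=500, eps=1e-9, seed=0):
    """Nonnegative CP factorization of a 3-way array via multiplicative updates.

    Returns factors A (I x r), B (J x r), C (K x r) with
    X[i, j, k] ~= sum_l A[i, l] * B[j, l] * C[k, l].
    """
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.random((I, rank))
    B = rng.random((J, rank))
    C = rng.random((K, rank))

    # Mode unfoldings, ordered to match the Khatri-Rao columns used below.
    X1 = X.transpose(0, 2, 1).reshape(I, K * J)   # X1[i, k*J + j] = X[i, j, k]
    X2 = X.transpose(1, 2, 0).reshape(J, K * I)   # X2[j, k*I + i] = X[i, j, k]
    X3 = X.transpose(2, 1, 0).reshape(K, J * I)   # X3[k, j*I + i] = X[i, j, k]

    for _ in range(iters):
        # Each update multiplies nonnegative quantities, so factors stay >= 0.
        A *= (X1 @ khatri_rao(C, B)) / (A @ ((C.T @ C) * (B.T @ B)) + eps)
        B *= (X2 @ khatri_rao(C, A)) / (B @ ((C.T @ C) * (A.T @ A)) + eps)
        C *= (X3 @ khatri_rao(B, A)) / (C @ ((B.T @ B) * (A.T @ A)) + eps)
    return A, B, C

if __name__ == "__main__":
    # Sanity check on a random low-rank nonnegative tensor.
    rng = np.random.default_rng(1)
    A0, B0, C0 = rng.random((20, 3)), rng.random((15, 3)), rng.random((10, 3))
    X = np.einsum("il,jl,kl->ijk", A0, B0, C0)
    A, B, C = ntf_cp(X, rank=3, iters=2000)
    Xhat = np.einsum("il,jl,kl->ijk", A, B, C)
    print("relative error:", np.linalg.norm(X - Xhat) / np.linalg.norm(X))
```

The only operations involved are matrix products and elementwise scaling, which is the sense in which such schemes rely purely on iterative linear algebra.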

Global and finite termination of a two-phase augmented Lagrangian filter method for general quadratic programs

We present a two-phase algorithm for solving large-scale quadratic programs (QPs). In the first phase, gradient-projection iterations approximately minimize an augmented Lagrangian function and provide an estimate of the optimal active set. In the second phase, an equality-constrained QP defined by the current inactive variables is approximately minimized in order to generate a second-order search …
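
As a rough sketch of the two phases, with a standard-form QP and symbols $H$, $c$, $A$, $b$, $y_k$, $\rho_k$ assumed for illustration: for $\min_x\ \tfrac12 x^{T}Hx + c^{T}x$ subject to $Ax = b$, $x \ge 0$, phase one applies gradient projection to the bound-constrained augmented Lagrangian subproblem
\[
\min_{x \ge 0}\ \tfrac12 x^{T}Hx + c^{T}x - y_k^{T}(Ax - b) + \tfrac{\rho_k}{2}\,\|Ax - b\|_2^{2},
\]
and the variables held at their bounds at the approximate minimizer estimate the optimal active set. Phase two fixes those variables at their bounds and approximately solves the equality-constrained QP in the remaining free variables (the constraints $Ax = b$ with the fixed components substituted), which yields the second-order search direction.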

Exact regularization of linear programs

We show that linear programs (LPs) admit regularizations that either contract the original (primal) solution set or leave it unchanged. Any regularization function that is convex and has compact level sets is allowed; differentiability is not required. This is an extension of the result first described by Mangasarian and Meyer (SIAM J. Control Optim., 17(6), pp. …
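
A concrete illustration, using standard-form notation that is assumed here: for the LP $\min_x\ c^{T}x$ subject to $Ax = b$, $x \ge 0$, with solution set $S$, the regularized problem is
\[
\min_{x}\ c^{T}x + \delta\,\varphi(x) \quad\text{s.t.}\quad Ax = b,\ x \ge 0,
\]
where $\varphi$ is convex with compact level sets. Exactness means there is $\bar\delta > 0$ such that for every $\delta \in (0, \bar\delta]$ the regularized solutions lie in $S$. A classical special case takes $\varphi(x) = \tfrac12\|x\|_2^{2}$, which for all sufficiently small $\delta$ selects the least two-norm solution of the LP.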

An interior-point method for MPECs based on strictly feasible relaxations

An interior-point method for solving mathematical programs with equilibrium constraints (MPECs) is proposed. At each iteration of the algorithm, a single primal-dual step is computed from each subproblem of a sequence. Each subproblem is defined as a relaxation of the MPEC with a nonempty strictly feasible region. In contrast to previous approaches, the proposed relaxation …
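
To fix ideas, here is the shape of a well-known relaxation of the complementarity constraints, shown for orientation only; the relaxation proposed in the paper is not this one, and the symbols $x_1$, $x_2$, $\theta$ are illustrative. Writing the MPEC with an explicit complementarity condition,
\[
\min_{x}\ f(x) \quad\text{s.t.}\quad c(x) \ge 0,\qquad x_1 \ge 0,\quad x_2 \ge 0,\quad (x_1)_i\,(x_2)_i = 0 \ \ \text{for all } i,
\]
a Scholtes-type relaxed subproblem keeps $x_1, x_2 \ge 0$ but replaces the complementarity condition by
\[
(x_1)_i\,(x_2)_i \le \theta, \qquad \theta > 0,
\]
so that points with $x_1, x_2 > 0$ and small pairwise products are strictly feasible, and an interior-point step is well defined; driving $\theta \to 0$ recovers the original MPEC.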

A globally convergent linearly constrained Lagrangian method for nonlinear optimization

For optimization problems with nonlinear constraints, linearly constrained Lagrangian (LCL) methods solve a sequence of subproblems of the form “minimize an augmented Lagrangian function subject to linearized constraints”. Such methods converge rapidly near a solution but may not be reliable from arbitrary starting points. The well-known software package MINOS has proven effective on many …
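
The subproblem quoted in the abstract can be written out explicitly; the standard form and the symbols $y_k$, $\rho_k$, $J$ below are assumed for illustration. For $\min_x f(x)$ subject to $c(x) = 0$ and $x \ge 0$, the $k$-th LCL subproblem is
\[
\min_{x}\ f(x) - y_k^{T} c(x) + \tfrac{\rho_k}{2}\,\|c(x)\|_2^{2}
\quad\text{s.t.}\quad c(x_k) + J(x_k)\,(x - x_k) = 0,\quad x \ge 0,
\]
where $y_k$ is the current multiplier estimate, $\rho_k$ the penalty parameter, and $J(x_k)$ the Jacobian of $c$ at the current iterate; this is the "augmented Lagrangian subject to linearized constraints" structure referred to in the quoted description.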