A Dual Gradient-Projection Method for Large-Scale Strictly Convex Quadratic Problems

The details of a solver for minimizing a strictly convex quadratic objective function subject to general linear constraints are presented. The method uses a gradient projection algorithm enhanced with subspace acceleration to solve the bound-constrained dual optimization problem. Such gradient projection methods are well known, but are typically employed to solve the primal problem when only …
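For context, a minimal sketch of the dual gradient-projection idea, assuming a primal problem of the form min $\tfrac{1}{2}x^THx + c^Tx$ subject to $Ax \geq b$ (the problem form, names, and the plain projected-gradient loop are our illustration; the paper's method adds subspace acceleration on top of such a loop):

```python
import numpy as np

def dual_gradient_projection(H, c, A, b, steps=500):
    """Minimal sketch, assuming the primal problem
        min 0.5*x'Hx + c'x  s.t.  Ax >= b,  H symmetric positive definite.
    The dual is a bound-constrained problem in the multipliers lam >= 0,
    treated here with plain projected gradient ascent (no subspace
    acceleration). All names are illustrative, not the paper's code."""
    Hinv = np.linalg.inv(H)
    # Fixed step below 1/L, where L = ||A H^{-1} A'|| bounds the dual curvature.
    alpha = 1.0 / np.linalg.norm(A @ Hinv @ A.T, 2)
    lam = np.zeros(A.shape[0])
    for _ in range(steps):
        x = Hinv @ (A.T @ lam - c)                        # Lagrangian minimizer in x
        lam = np.maximum(0.0, lam + alpha * (b - A @ x))  # ascent step, projected onto lam >= 0
    return Hinv @ (A.T @ lam - c), lam
```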

A Flexible ADMM Algorithm for Big Data Applications

We present a flexible Alternating Direction Method of Multipliers (F-ADMM) algorithm for solving optimization problems involving a strongly convex objective function that is separable into $n \geq 2$ blocks, subject to (non-separable) linear equality constraints. The F-ADMM algorithm uses a \emph{Gauss-Seidel} scheme to update blocks of variables, and a regularization term is added to each …
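A loose sketch of one Gauss-Seidel block sweep with a per-block regularization term, in the spirit of the description above (the quadratic block objectives $f_i(x_i) = \tfrac{1}{2}\|x_i - t_i\|^2$, the constants, and all names are assumptions made for illustration, not the paper's algorithm):

```python
import numpy as np

def f_admm_sweep(A_blocks, t_blocks, xs, y, b, rho=1.0, mu=1.0):
    """One Gauss-Seidel sweep of a regularized ADMM for
        min sum_i 0.5*||x_i - t_i||^2   s.t.   sum_i A_i x_i = b.
    Each block minimizes the augmented Lagrangian plus the regularization
    term (mu/2)*||x_i - x_i_old||^2, using the newest values of the blocks
    already updated this sweep."""
    n = len(A_blocks)
    for i, (Ai, ti) in enumerate(zip(A_blocks, t_blocks)):
        # Contribution of all other blocks to the equality constraint
        # (blocks j < i already refreshed this sweep, j > i still old).
        r = sum(A_blocks[j] @ xs[j] for j in range(n) if j != i) - b
        lhs = (1.0 + mu) * np.eye(Ai.shape[1]) + rho * Ai.T @ Ai
        rhs = ti + mu * xs[i] - Ai.T @ y - rho * Ai.T @ r
        xs[i] = np.linalg.solve(lhs, rhs)
    y = y + rho * (sum(A @ x for A, x in zip(A_blocks, xs)) - b)
    return xs, y
```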

A Filter SQP Method: Local Convergence and Numerical Results

The work by Gould, Loh, and Robinson [“A filter method with unified step computation for nonlinear optimization”, SIAM J. Optim., 24 (2014), pp. 175–209] established global convergence of a new filter line search method for finding local first-order solutions to nonlinear and nonconvex constrained optimization problems. A key contribution of that work was that the …
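For readers unfamiliar with filters, a generic acceptance test over pairs of constraint violation $\theta$ and objective value $f$ looks roughly as follows (a standard Fletcher–Leyffer-style envelope with illustrative constants, not the specific mechanism analyzed in the paper):

```python
def filter_acceptable(theta, f, filter_entries, gamma=1e-5):
    """Generic filter acceptance test: the trial pair (theta, f) of
    constraint violation and objective value is acceptable if, against every
    stored pair, it gives a sufficient improvement in at least one measure."""
    return all(theta <= (1.0 - gamma) * th or f <= fv - gamma * th
               for th, fv in filter_entries)
```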

A Trust Region Algorithm with a Worst-Case Iteration Complexity of ${\cal O}(\epsilon^{-3/2})$ for Nonconvex Optimization

We propose a trust region algorithm for solving nonconvex smooth optimization problems. For any $\bar\epsilon \in (0,\infty)$, the algorithm requires at most $\mathcal{O}(\epsilon^{-3/2})$ iterations, function evaluations, and derivative evaluations to drive the norm of the gradient of the objective function below any $\epsilon \in (0,\bar\epsilon]$. This improves upon the $\mathcal{O}(\epsilon^{-2})$ bound known to hold for …
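For orientation, the standard trust-region loop that the algorithm modifies looks roughly like this (a plain Cauchy-point sketch with conventional acceptance and radius rules; the paper's $\mathcal{O}(\epsilon^{-3/2})$ bound relies on different, carefully designed rules):

```python
import numpy as np

def trust_region_sketch(f, grad, hess, x, eps=1e-6, delta=1.0, max_iter=1000):
    """Plain trust-region loop with a Cauchy-point subproblem solve and
    conventional acceptance/radius updates (illustrative only; these rules
    do NOT give the improved complexity bound)."""
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) <= eps:          # gradient small enough: done
            break
        gHg = g @ H @ g
        t = delta / np.linalg.norm(g)         # step length to the boundary
        if gHg > 0:
            t = min(t, (g @ g) / gHg)         # model minimizer along -g
        s = -t * g
        pred = -(g @ s + 0.5 * s @ H @ s)     # predicted model reduction (> 0)
        rho = (f(x) - f(x + s)) / pred        # actual-to-predicted ratio
        if rho >= 0.1:
            x = x + s                         # accept the step
            if rho >= 0.75:
                delta *= 2.0                  # very successful: expand
        else:
            delta *= 0.5                      # reject and shrink
    return x
```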

Adaptive Augmented Lagrangian Methods: Algorithms and Practical Numerical Experience

In this paper, we consider augmented Lagrangian (AL) algorithms for solving large-scale nonlinear optimization problems that employ adaptive strategies for updating the penalty parameter. Our work is motivated by the recently proposed adaptive AL trust region method by Curtis et al. [An adaptive augmented Lagrangian method for large-scale constrained optimization, Math. Program. 152 (2015), pp. 201–245]. …
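A loose sketch of an AL outer loop with an adaptive penalty update (the rule below, a tenfold increase whenever the constraint violation fails to shrink by a fixed factor, is a textbook heuristic shown only for contrast; the paper's steering strategies are more sophisticated):

```python
import numpy as np

def adaptive_al_sketch(grad_f, c, jac_c, x, outer_iters=30):
    """Loose sketch of an augmented Lagrangian outer loop with an adaptive
    penalty parameter for equality constraints c(x) = 0. The inner solve is
    a crude fixed-step gradient descent, purely for illustration."""
    y = np.zeros_like(c(x))
    rho, viol_old = 10.0, np.inf
    for _ in range(outer_iters):
        for _ in range(200):  # crude approximate minimization of the AL in x
            x = x - 0.01 * (grad_f(x) + jac_c(x).T @ (y + rho * c(x)))
        viol = np.linalg.norm(c(x))
        if viol <= 0.9 * viol_old:      # enough progress on feasibility:
            y = y + rho * c(x)          # first-order multiplier update
        else:                           # otherwise increase the penalty
            rho *= 10.0
        viol_old = viol
    return x, y, rho
```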

A Globally Convergent Stabilized SQP Method: Superlinear Convergence

Regularized and stabilized sequential quadratic programming (SQP) methods are two classes of methods designed to resolve the numerical and theoretical difficulties associated with ill-posed or degenerate nonlinear optimization problems. Recently, a regularized SQP method has been proposed that allows convergence to points satisfying certain second-order KKT conditions (SIAM J. Optim., 23(4):1983–2010, 2013). The method is …
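The core linear algebra of a stabilized SQP step for equality-constrained problems can be sketched as follows (notation and the dense solve are ours; the point is that the dual regularization $\mu > 0$ keeps the system nonsingular even when the constraint Jacobian is rank deficient):

```python
import numpy as np

def stabilized_sqp_step(g, H, J, c, y, mu):
    """One stabilized SQP step for equality constraints c(x) = 0: solve
        [ H    J^T  ] [ dx ]   [ -(g + J^T y) ]
        [ J   -mu*I ] [ dy ] = [ -c           ]
    The -mu*I block is the stabilization; dense solve for illustration."""
    n, m = H.shape[0], J.shape[0]
    K = np.block([[H, J.T], [J, -mu * np.eye(m)]])
    rhs = np.concatenate([-(g + J.T @ y), -c])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]   # primal step dx, dual step dy
```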

An Interior-Point Trust-Funnel Algorithm for Nonlinear Optimization

We present an interior-point trust-funnel algorithm for solving large-scale nonlinear optimization problems. The method is based on an approach proposed by Gould and Toint (Math. Program. 122(1):155–196, 2010) that focused on solving equality constrained problems. Our method is similar in that it achieves global convergence guarantees by combining a trust-region methodology with a funnel mechanism, …
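As a toy illustration of the funnel mechanism (function names and constants are ours), the idea is to accept only trial points whose constraint violation $\theta$ stays inside a monotonically shrinking bound:

```python
def funnel_accepts(theta_trial, funnel_radius):
    """Toy funnel test: a trial point is acceptable only if its constraint
    violation lies inside the current funnel radius."""
    return theta_trial <= funnel_radius

def funnel_tighten(theta_new, funnel_radius, kappa=0.9):
    """After a successful feasibility-improving step, shrink the radius
    toward the new violation so the bound decreases monotonically
    (constants illustrative)."""
    return min(funnel_radius, max(kappa * funnel_radius, theta_new))
```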

A Regularized SQP Method with Convergence to Second-Order Optimal Points

Regularized and stabilized sequential quadratic programming methods are two classes of sequential quadratic programming (SQP) methods designed to resolve the numerical and theoretical difficulties associated with ill-posed or degenerate nonlinear optimization problems. Recently, a regularized SQP method has been proposed that provides a strong connection between augmented Lagrangian methods and stabilized SQP methods. The method …
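One way the AL/stabilized-SQP connection is usually made precise is through a generalized primal-dual augmented Lagrangian in the style of Gill and Robinson; a sketch of evaluating such a function (form assumed from that line of work, names ours):

```python
import numpy as np

def primal_dual_al(f_val, c_val, y, y_e, mu, nu=1.0):
    """Evaluate a generalized primal-dual augmented Lagrangian
        M(x, y) = f - c'y_e + 1/(2 mu) * ||c||^2
                  + nu/(2 mu) * ||c + mu*(y - y_e)||^2,
    whose joint minimization in (x, y) yields stabilized-SQP-like steps.
    Here f_val and c_val are f(x) and c(x) at the current point x."""
    return (f_val - c_val @ y_e
            + c_val @ c_val / (2.0 * mu)
            + nu * np.sum((c_val + mu * (y - y_e)) ** 2) / (2.0 * mu))
```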

A filter method with unified step computation for nonlinear optimization

We present a line search method for solving general nonlinear and nonconvex optimization problems. The method is of the filter variety, but uses a robust (always feasible) subproblem based on an exact penalty function to compute a search direction. This contrasts with traditional filter methods, which use a (separate) restoration phase designed to reduce infeasibility until …
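A sketch of why an exact-penalty subproblem is always feasible (a generic $\ell_1$ model for equality constraints, used purely for illustration):

```python
import numpy as np

def l1_penalty_model(g, H, J, c, rho):
    """Build the generic exact-penalty step model
        m(d) = g'd + 0.5 d'H d + rho * ||c + J d||_1,
    which is well defined for every d, in particular d = 0, even when the
    linearized constraints c + J d = 0 are inconsistent."""
    def m(d):
        return g @ d + 0.5 * d @ H @ d + rho * np.sum(np.abs(c + J @ d))
    return m
```

Minimizing $m$ can be recast as a smooth quadratic program by introducing elastic variables, so a single, always-feasible subproblem can replace the separate restoration phase.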

An Inexact Sequential Quadratic Optimization Algorithm for Nonlinear Optimization

We propose a sequential quadratic optimization method for solving nonlinear optimization problems with equality and inequality constraints. The novel feature of the algorithm is that, during each iteration, the primal-dual search direction is allowed to be an inexact solution of a given quadratic optimization subproblem. We present a set of generic, loose conditions that the …
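An illustrative inexactness test (not the paper's precise conditions): accept an approximate primal-dual subproblem solution once its KKT residual is a fixed fraction of the residual at the zero step:

```python
import numpy as np

def inexact_step_ok(g, H, J, c, y, d, dy, kappa=0.1):
    """Illustrative inexactness test for an SQP subproblem with equality
    constraints: accept the approximate primal-dual step (d, dy) once its
    KKT residual has dropped to a fraction kappa of the residual at d = 0.
    (Generic condition; the paper's termination tests are different.)"""
    res = np.concatenate([g + H @ d + J.T @ (y + dy), c + J @ d])
    res0 = np.concatenate([g + J.T @ y, c])
    return np.linalg.norm(res) <= kappa * np.linalg.norm(res0)
```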