Adaptive Augmented Lagrangian Methods: Algorithms and Practical Numerical Experience

In this paper, we consider augmented Lagrangian (AL) algorithms for large-scale nonlinear optimization that execute adaptive strategies for updating the penalty parameter. Our work is motivated by the recently proposed adaptive AL trust region method of Curtis et al. [An adaptive augmented Lagrangian method for large-scale constrained optimization, Math. Program. 152 (2015), pp. 201–245]. …
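For orientation only (this is the textbook equality-constrained setup in our notation, not the adaptive rule of the cited paper), the augmented Lagrangian and its basic updates can be written as
\[
L_A(x,\lambda;\rho) \;=\; f(x) + \lambda^T c(x) + \tfrac{\rho}{2}\,\|c(x)\|^2,
\qquad
\lambda^{+} = \lambda + \rho\, c(x^{+}),
\]
where $c(x)=0$ collects the constraints and $\rho>0$ is the penalty parameter, conventionally increased between outer iterations when the constraint violation $\|c(x^{+})\|$ does not decrease sufficiently; the adaptive strategies considered in the paper concern how and when this parameter is updated.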

On Second Order Optimality Conditions in Nonlinear Optimization

In this work we present new weak conditions that ensure the validity of necessary second order optimality conditions (SOC) for nonlinear optimization. We are able to prove that weak and strong SOCs hold for all Lagrange multipliers using Abadie-type assumptions. We also prove weak and strong SOCs for at least one Lagrange multiplier imposing the …
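For reference, one common phrasing of these conditions (our notation; the paper's use of "weak" and "strong" may differ in detail): with Lagrangian $L(x,\lambda,\mu)=f(x)+\lambda^T h(x)+\mu^T g(x)$ and active inequality set $A(x^*)$, the weak SOC asks that
\[
d^T \nabla^2_{xx} L(x^*,\lambda,\mu)\, d \;\ge\; 0
\quad\text{for all } d \text{ with } \nabla h_j(x^*)^T d = 0 \ \forall j
\ \text{ and } \ \nabla g_i(x^*)^T d = 0 \ \forall i\in A(x^*),
\]
while the strong SOC asks the same inequality on the (larger) critical cone, in which only $\nabla g_i(x^*)^T d \le 0$ is required for active inequalities whose multiplier is zero.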

A Lagrangean Decomposition Approach for Robust Combinatorial Optimization

We address robust versions of combinatorial optimization problems, focusing on the discrete scenario case and the case of uncorrelated ellipsoidal uncertainty. We present a branch-and-bound algorithm for the min-max variant of these problems which uses lower bounds obtained from Lagrangean decomposition, allowing us to separate the uncertainty aspect in the objective function from the combinatorial structure …
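As a rough sketch of the discrete-scenario setting (notation ours): with a combinatorial feasible set $X \subseteq \{0,1\}^n$ and cost scenarios $c^1,\dots,c^m$, the min-max problem reads
\[
\min_{x \in X} \; \max_{k=1,\dots,m} \; (c^k)^T x .
\]
A generic Lagrangean decomposition duplicates the variables, rewrites the problem over $(x,z)$ with the coupling constraint $x=z$, and dualizes that constraint, so that one subproblem retains the combinatorial set $X$ while the other carries the uncertain objective; dual bounds of this kind are the lower bounds fed into the branch-and-bound scheme. The exact splitting used in the paper may differ from this generic description.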

Globally Convergent Evolution Strategies for Constrained Optimization

In this work we propose, analyze, and test algorithms for linearly constrained optimization in which no derivatives of the objective function are used. The proposed methodology is built upon the globally convergent evolution strategies previously introduced by the authors for unconstrained optimization. Two approaches are considered for handling the constraints. In a first approach, …
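As a purely illustrative sketch (not either of the authors' two approaches), the following shows one evolution-strategy step for a linearly constrained problem $\min f(y)$ subject to $Ay \le b$, handling infeasible offspring with a simple extreme barrier; all function and parameter names below are ours.

```python
import numpy as np

def es_step(f, x, sigma, A, b, n_offspring=10, n_parents=3, rng=None):
    """One illustrative evolution-strategy step for min f(y) s.t. A y <= b.

    Infeasible offspring receive an "extreme barrier" value of +inf; this is
    only one simple way of handling linear constraints, not the paper's.
    """
    rng = np.random.default_rng() if rng is None else rng
    offspring = x + sigma * rng.standard_normal((n_offspring, x.size))
    values = np.full(n_offspring, np.inf)
    for i, y in enumerate(offspring):
        if np.all(A @ y <= b):              # evaluate f only at feasible points
            values[i] = f(y)
    best = np.argsort(values)[:n_parents]
    if not np.isfinite(values[best[0]]):
        return x, 0.5 * sigma               # every offspring infeasible: shrink step
    best = best[np.isfinite(values[best])]
    return offspring[best].mean(axis=0), sigma   # unweighted recombination
```

In practice the step size sigma would also be adapted and a decrease condition imposed on the accepted point; that machinery, which the global convergence theory relies on, is omitted from this sketch.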

An Augmented Lagrangian based Algorithm for Distributed Non-Convex Optimization

This paper is about distributed derivative-based algorithms for solving optimization problems with a separable (potentially nonconvex) objective function and coupled affine constraints. A parallelizable method is proposed that combines ideas from the fields of sequential quadratic programming and augmented Lagrangian algorithms. The method negotiates shared dual variables that may be interpreted as prices, a concept …
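As a schematic of the problem class (our notation), the methods target
\[
\min_{x_1,\dots,x_N} \; \sum_{i=1}^{N} f_i(x_i)
\quad \text{s.t.} \quad \sum_{i=1}^{N} A_i x_i = b,
\]
where each $f_i$ may be nonconvex. Dualizing the coupling constraint introduces shared multipliers $\lambda$; in a generic dual-ascent view these are updated from the aggregate residual, e.g. $\lambda^{+} = \lambda + \rho\bigl(\sum_i A_i x_i - b\bigr)$, which is the sense in which they act as negotiated prices. The update shown here is only the textbook price-update idea, not the paper's algorithm, which combines this augmented Lagrangian viewpoint with SQP-type steps.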

A Globally Convergent Stabilized SQP Method: Superlinear Convergence

Regularized and stabilized sequential quadratic programming (SQP) methods are two classes of methods designed to resolve the numerical and theoretical difficulties associated with ill-posed or degenerate nonlinear optimization problems. Recently, a regularized SQP method has been proposed that allows convergence to points satisfying certain second-order KKT conditions (SIAM J. Optim., 23(4):1983–2010, 2013). The method is …
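For context (equality-constrained case, notation ours): one common way to write the stabilized SQP subproblem at an iterate $(x_k,\lambda_k)$ with stabilization parameter $\mu_k>0$ is the min-max problem
\[
\min_{d} \; \max_{\lambda} \;\; \nabla f(x_k)^T d + \tfrac12\, d^T H_k d
 + \lambda^T\bigl(c(x_k) + J(x_k)\, d\bigr) - \tfrac{\mu_k}{2}\,\|\lambda - \lambda_k\|^2,
\]
whose proximal term in $\lambda$ keeps the multiplier estimates well behaved near degenerate solutions. Subproblems of this general flavor underlie stabilized SQP analyses; the precise formulation used in the paper may differ.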

Local Convergence of an Algorithm for Subspace Identification from Partial Data

GROUSE (Grassmannian Rank-One Update Subspace Estimation) is an iterative algorithm for identifying a linear subspace of $\mathbb{R}^n$ from data consisting of partial observations of random vectors from that subspace. This paper examines local convergence properties of GROUSE, under assumptions on the randomness of the observed vectors, the randomness of the subset of elements observed at …
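A minimal sketch of a GROUSE-style rank-one update, assuming an orthonormal basis matrix $U$ and using the "greedy" angle $\theta=\arctan(\|r\|/\|p\|)$ as one possible step; the paper's analysis concerns particular step-size choices, and the names below are ours.

```python
import numpy as np

def grouse_step(U, idx, v_obs):
    """One GROUSE-style rank-one update of an orthonormal basis U (n x d).

    idx   : indices of the observed entries of the incoming vector
    v_obs : the observed entries themselves
    """
    n, d = U.shape
    # Least-squares fit of the observed entries in the current subspace.
    w, *_ = np.linalg.lstsq(U[idx, :], v_obs, rcond=None)
    p = U @ w                               # predicted full vector, in range(U)
    r = np.zeros(n)
    r[idx] = v_obs - U[idx, :] @ w          # residual, orthogonal to range(U)
    r_norm, p_norm = np.linalg.norm(r), np.linalg.norm(p)
    if r_norm < 1e-12 or p_norm < 1e-12:
        return U                            # nothing to rotate
    theta = np.arctan(r_norm / p_norm)      # "greedy" step; other rules exist
    # Rank-one rotation that keeps the columns of U orthonormal.
    return U + np.outer((np.cos(theta) - 1.0) * p / p_norm
                        + np.sin(theta) * r / r_norm,
                        w / np.linalg.norm(w))
```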

Projection Methods: An Annotated Bibliography of Books and Reviews

Projections onto sets are used in a wide variety of methods in optimization theory, but not every method that uses projections really belongs to the class of projection methods as we mean it here. Here projection methods are iterative algorithms that use projections onto sets while relying on the general principle that when a family …
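The simplest concrete instance of this principle is von Neumann-style alternating projections onto two convex sets; the sketch below (names and sets are ours, chosen only for illustration) seeks a point in the intersection of a box and a hyperplane.

```python
import numpy as np

def project_box(x, lo, hi):
    """Projection onto the box [lo, hi]."""
    return np.clip(x, lo, hi)

def project_hyperplane(x, a, b):
    """Projection onto the hyperplane {y : a^T y = b}."""
    return x - (a @ x - b) / (a @ a) * a

def alternating_projections(x0, a, b, lo, hi, iters=100):
    """Alternate projections onto the two sets, aiming at their intersection."""
    x = x0.copy()
    for _ in range(iters):
        x = project_box(project_hyperplane(x, a, b), lo, hi)
    return x

# Example: find a point with coordinates in [0, 1] that sum to 1.
x = alternating_projections(np.zeros(3), np.ones(3), 1.0, 0.0, 1.0, iters=50)
```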

An Efficient Gauss-Newton Algorithm for Symmetric Low-Rank Product Matrix Approximations

We derive and study a Gauss-Newton method for computing a symmetric low-rank product that is the closest to a given symmetric matrix in Frobenius norm. Our Gauss-Newton method, which has a particularly simple form, shares the same order of iteration complexity as a gradient method when the size of the desired eigenspace is small, but can be …
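In our notation, the underlying approximation problem is
\[
\min_{U \in \mathbb{R}^{n\times k}} \; f(U) := \tfrac12\,\|UU^T - A\|_F^2
\]
for a given symmetric matrix $A$, with gradient $\nabla f(U) = 2\,(UU^T - A)\,U$. The gradient-method baseline mentioned above iterates $U \leftarrow U - \alpha\,\nabla f(U)$, whereas a Gauss-Newton step instead linearizes the residual $UU^T - A$ around the current $U$ and minimizes the resulting quadratic model; the paper's specific derivation and complexity claims are not reproduced here.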