Globally Convergent Evolution Strategies for Constrained Optimization

In this work we propose, analyze, and test algorithms for linearly constrained optimization in which no derivatives of the objective function are used. The proposed methodology is built upon the globally convergent evolution strategies previously introduced by the authors for unconstrained optimization. Two approaches to handling the constraints are considered. In a first approach, …
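As a rough illustration of combining an evolution strategy with a simple feasibility barrier for linear constraints, here is a minimal Python sketch; the (1+1)-ES scheme, test problem, and all parameter values are assumptions, not the authors' algorithm.

```python
# A minimal (1+1)-ES sketch with an "extreme barrier" for linear inequality
# constraints A x <= b: infeasible offspring are rejected outright.
# Generic illustration only; all parameters below are assumptions.
import numpy as np

def es_barrier(f, A, b, x0, sigma=0.5, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    x, fx = x0, f(x0)
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(x.size)  # Gaussian mutation
        if np.all(A @ y <= b):                       # feasibility check
            fy = f(y)
            if fy < fx:                              # keep improving point
                x, fx = y, fy
                sigma *= 1.1                         # 1/5-rule style adaptation
            else:
                sigma *= 0.98
    return x, fx

# Example: minimize a quadratic over the box 0 <= x <= 1, written as A x <= b.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.concatenate([np.ones(2), np.zeros(2)])
x, fx = es_barrier(lambda z: np.sum((z - 0.7) ** 2), A, b, np.full(2, 0.5))
print(x, fx)
```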

An Augmented Lagrangian Based Algorithm for Distributed Non-Convex Optimization

This paper concerns distributed derivative-based algorithms for solving optimization problems with a separable (potentially nonconvex) objective function and coupled affine constraints. A parallelizable method is proposed that combines ideas from the fields of sequential quadratic programming and augmented Lagrangian algorithms. The method negotiates shared dual variables that may be interpreted as prices, a concept …
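To illustrate the price interpretation of shared dual variables, here is a minimal dual-decomposition sketch on a two-agent toy problem; the quadratic costs, matrices, and step size are assumptions, and this is not the paper's SQP/augmented-Lagrangian hybrid.

```python
# Separable objectives coupled by an affine constraint A1 x1 + A2 x2 = c;
# the shared dual variable lam acts as a "price" on the coupling residual.
import numpy as np

d1, d2 = np.array([1.0, 0.0]), np.array([0.0, 2.0])
A1, A2 = np.eye(2), np.eye(2)
c = np.array([2.0, 1.0])

lam = np.zeros(2)
for _ in range(500):
    # Each agent minimizes f_i(x_i) + lam @ (A_i x_i) locally (parallelizable);
    # for f_i(x) = 0.5 * ||x - d_i||^2 the minimizer is closed-form.
    x1 = d1 - A1.T @ lam
    x2 = d2 - A2.T @ lam
    lam += 0.2 * (A1 @ x1 + A2 @ x2 - c)   # price update on the residual

print(x1, x2, A1 @ x1 + A2 @ x2 - c)       # residual -> 0 at consensus
```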

A Globally Convergent Stabilized SQP Method: Superlinear Convergence

Regularized and stabilized sequential quadratic programming (SQP) methods are two classes of methods designed to resolve the numerical and theoretical difficulties associated with ill-posed or degenerate nonlinear optimization problems. Recently, a regularized SQP method has been proposed that allows convergence to points satisfying certain second-order KKT conditions (SIAM J. Optim., 23(4):1983–2010, 2013). The method is …
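For readers unfamiliar with the basic SQP iteration that regularized and stabilized variants build on, here is a minimal local sketch: plain full-step Newton on the KKT conditions of an equality-constrained toy problem. The problem and starting point are assumptions; none of the stabilization machinery discussed in the paper appears here.

```python
# Local SQP for min f(x) s.t. c(x) = 0: solve the KKT system of the
# quadratic model at each iterate for the step and multiplier update.
import numpy as np

f_grad = lambda x: np.array([1.0, 1.0])        # f(x) = x1 + x2
c      = lambda x: np.array([x @ x - 1.0])     # c(x) = ||x||^2 - 1
c_jac  = lambda x: 2.0 * x.reshape(1, 2)

x, lam = np.array([-0.8, -0.8]), np.array([0.5])
for _ in range(10):
    H = 2.0 * lam[0] * np.eye(2)               # Hessian of the Lagrangian
    A = c_jac(x)
    K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    rhs = -np.concatenate([f_grad(x) + A.T @ lam, c(x)])
    step = np.linalg.solve(K, rhs)
    x, lam = x + step[:2], lam + step[2:]

print(x, lam)   # -> approx (-0.7071, -0.7071) with multiplier 0.7071
```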

Local Convergence of an Algorithm for Subspace Identification from Partial Data

GROUSE (Grassmannian Rank-One Update Subspace Estimation) is an iterative algorithm for identifying a linear subspace of $\mathbb{R}^n$ from data consisting of partial observations of random vectors from that subspace. This paper examines local convergence properties of GROUSE, under assumptions on the randomness of the observed vectors, the randomness of the subset of elements observed at …
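A simplified GROUSE-style update might look as follows; this sketch replaces the exact geodesic step with a plain rank-one gradient correction followed by QR re-orthonormalization, and the problem sizes and step size are assumptions.

```python
# Maintain an orthonormal basis U for a d-dimensional subspace of R^n and
# update it from partial observations of vectors lying in a target subspace.
import numpy as np

rng = np.random.default_rng(0)
n, d, steps = 50, 3, 3000
U_true, _ = np.linalg.qr(rng.standard_normal((n, d)))   # target subspace
U, _ = np.linalg.qr(rng.standard_normal((n, d)))        # initial estimate

for t in range(steps):
    v = U_true @ rng.standard_normal(d)                 # vector in the subspace
    omega = rng.choice(n, size=n // 2, replace=False)   # observed entries
    w, *_ = np.linalg.lstsq(U[omega], v[omega], rcond=None)  # best-fit weights
    r = np.zeros(n)
    r[omega] = v[omega] - U[omega] @ w                  # residual on omega
    U, _ = np.linalg.qr(U + 0.1 * np.outer(r, w))       # rank-one correction

# Distance between projectors (should shrink as U aligns with U_true).
print(np.linalg.norm(U @ U.T - U_true @ U_true.T))
```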

Projection Methods: An Annotated Bibliography of Books and Reviews

Projections onto sets are used in a wide variety of methods in optimization theory, but not every method that uses projections belongs to the class of projection methods as we mean it here. In our sense, projection methods are iterative algorithms that use projections onto sets while relying on the general principle that when a family …
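In that spirit, here is a minimal sketch of one classical projection method, alternating (POCS-style) projections onto two convex sets; the halfspace and ball used here are illustrative assumptions.

```python
# Alternating projections seeking a point in the intersection of two sets.
import numpy as np

def proj_halfspace(x, a, b):          # {x : a @ x <= b}
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def proj_ball(x, center, radius):     # {x : ||x - center|| <= radius}
    d = x - center
    nd = np.linalg.norm(d)
    return x if nd <= radius else center + (radius / nd) * d

a, b = np.array([1.0, 1.0]), 1.0
center, radius = np.zeros(2), 2.0
x = np.array([5.0, 3.0])
for _ in range(100):                  # alternate until the iterates settle
    x = proj_ball(proj_halfspace(x, a, b), center, radius)
print(x, a @ x <= b + 1e-9, np.linalg.norm(x - center) <= radius + 1e-9)
```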

An Efficient Gauss-Newton Algorithm for Symmetric Low-Rank Product Matrix Approximations

We derive and study a Gauss-Newton method for computing a symmetric low-rank product that is closest to a given symmetric matrix in the Frobenius norm. Our Gauss-Newton method, which has a particularly simple form, shares the same order of iteration complexity as a gradient method when the size of the desired eigenspace is small, but can be …
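As a point of reference for the problem being solved, min over Y of ||A - Y Y^T||_F, here is a generic Gauss-Newton-type solve on a tiny instance via scipy.optimize.least_squares; this baseline sketch is not the authors' specialized method, and the sizes and seed are assumptions.

```python
# Fit a symmetric rank-k product Y Y^T to a symmetric matrix A by nonlinear
# least squares on the vectorized residual.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
n, k = 8, 2
B = rng.standard_normal((n, k))
A = B @ B.T                                   # symmetric PSD target of rank k

def residual(y_flat):
    Y = y_flat.reshape(n, k)
    return (Y @ Y.T - A).ravel()              # vec(Y Y^T - A)

sol = least_squares(residual, rng.standard_normal(n * k))
Y = sol.x.reshape(n, k)
print(np.linalg.norm(Y @ Y.T - A, "fro"))     # ~0 at a global minimizer
```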

Strict Fejér Monotonicity by Superiorization of Feasibility-Seeking Projection Methods

We consider the superiorization methodology, which can be thought of as lying between feasibility-seeking and constrained minimization. It does not quite attempt to solve the full-fledged constrained minimization problem; rather, the task is to find a feasible point that is superior (with respect to the objective function value) to one returned by a feasibility-seeking …
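A minimal sketch of the superiorization pattern interlaces small objective-reducing perturbations (with summable step sizes) into a sweep of halfspace projections; the problem data and step schedule below are assumptions.

```python
# Feasibility-seeking projections onto halfspaces a_i @ x <= b_i, superiorized
# by steps along -grad f with geometrically decaying (hence summable) sizes.
import numpy as np

A = np.array([[1.0, 2.0], [-1.0, 1.0], [0.0, -1.0]])
b = np.array([4.0, 1.0, 0.0])
grad_f = lambda x: 2.0 * (x - np.array([3.0, 3.0]))   # f = ||x - (3,3)||^2

x, beta = np.array([0.0, 0.0]), 1.0
for k in range(200):
    g = grad_f(x)
    if np.linalg.norm(g) > 0:
        x = x - beta * g / np.linalg.norm(g)     # objective-reducing step
    beta *= 0.95                                 # summable perturbations
    for ai, bi in zip(A, b):                     # one sweep of projections
        viol = ai @ x - bi
        if viol > 0:
            x = x - (viol / (ai @ ai)) * ai
print(x, A @ x - b)    # feasible point with reduced objective value
```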

Linear equalities in blackbox optimization

The Mesh Adaptive Direct Search (Mads) algorithm is designed for blackbox optimization problems subject to general inequality constraints. Currently, Mads supports equalities neither in theory nor in practice. The present work proposes extensions to treat problems with linear equalities whose expression is known. The main idea is to reformulate the optimization problem into …
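The reformulation idea can be sketched as follows: eliminate the known equalities A x = b by parametrizing x = x0 + Z t with Z spanning null(A), then optimize the blackbox over t. In this minimal Python sketch, a crude compass search stands in for Mads, and the stand-in blackbox f is an assumption.

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0]])            # known equality: x1 + x2 + x3 = 1
b = np.array([1.0])
x0, *_ = np.linalg.lstsq(A, b, rcond=None) # particular solution
_, _, Vt = np.linalg.svd(A)
r = np.linalg.matrix_rank(A)
Z = Vt[r:].T                               # columns span null(A)

f = lambda x: np.sum((x - np.array([0.2, 0.3, 0.9])) ** 2)  # stand-in blackbox
g = lambda t: f(x0 + Z @ t)                # reduced problem over t

t, step = np.zeros(Z.shape[1]), 1.0        # crude compass search over t
while step > 1e-8:
    improved = False
    for i in range(t.size):
        for s in (step, -step):
            trial = t.copy()
            trial[i] += s
            if g(trial) < g(t):
                t, improved = trial, True
    if not improved:
        step *= 0.5

x = x0 + Z @ t
print(x, A @ x - b)                        # equality holds by construction
```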

Zero-Convex Functions, Perturbation Resilience, and Subgradient Projections for Feasibility-Seeking Methods

The convex feasibility problem (CFP) is at the core of the modeling of many problems in various areas of science. Subgradient projection methods are important tools for solving the CFP because they enable the use of subgradient calculations instead of orthogonal projections onto the individual sets of the problem. Working in a real Hilbert space, …
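Here is a minimal sketch of a cyclic subgradient-projection sweep for level sets C_i = {x : g_i(x) <= 0} with convex g_i; the two sets and their subgradients below are illustrative assumptions.

```python
# Instead of exact orthogonal projections onto each C_i, step along a
# subgradient of the violated constraint function g_i.
import numpy as np

gs = [
    (lambda x: np.linalg.norm(x) - 2.0,  lambda x: x / np.linalg.norm(x)),
    (lambda x: np.abs(x[0]) - 1.0,       lambda x: np.array([np.sign(x[0]), 0.0])),
]

x = np.array([4.0, 4.0])
for _ in range(100):
    for g, subgrad in gs:               # cyclic sweep over the sets
        val = g(x)
        if val > 0:
            s = subgrad(x)
            x = x - (val / (s @ s)) * s # subgradient projection step
print(x, [g(x) for g, _ in gs])         # all constraint values ~<= 0
```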

A DC (Difference of Convex functions) approach to MPECs

This article studies the MPEC problem via a reformulation as a DC (Difference of Convex functions) program. The reformulation is obtained by a partial penalization of the constraints. We prove that a classical optimality condition for a DC program, if a constraint qualification is satisfied for the MPEC, …
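For context, the basic DCA iteration for min g(x) - h(x) with g, h convex linearizes h at the current point and minimizes the convex surrogate; the toy decomposition below is an assumption and is unrelated to the paper's MPEC penalization.

```python
# DCA on f(x) = x^4 - 2 x^2, written as g - h with g(x) = x^4, h(x) = 2 x^2.
import numpy as np

h_grad = lambda x: 4.0 * x

x = 2.0
for _ in range(50):
    y = h_grad(x)                 # y_k in the subdifferential of h at x_k
    # x_{k+1} = argmin_x g(x) - y * x  <=>  4 x^3 = y  (first-order condition)
    x = np.cbrt(y / 4.0)
print(x)                          # converges to the critical point x = 1
```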