A Filter Active-Set Trust-Region Method

We develop a new active-set method for nonlinear programming problems that solves a regularized linear program to predict the active set and then fixes the active constraints to solve an equality-constrained quadratic program for fast convergence. Global convergence is promoted through the use of a filter. We show that the regularization parameter fulfills the same …
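
For readers unfamiliar with the two-phase idea, the sketch below shows a generic LP-prediction/EQP-correction step in Python. It is a minimal illustration under assumed notation (minimize f(x) subject to c(x) >= 0, with gradient g, Jacobian A, and Hessian H at the current point); the function and parameter names are hypothetical and the paper's regularization, trust-region, and filter machinery are not reproduced.

```python
# Hedged sketch of a generic two-phase active-set step: an l_inf trust-region
# LP predicts the active set, then an equality-constrained QP (EQP) on those
# constraints gives the fast step.  All arrays are assumed to be NumPy arrays.
import numpy as np
from scipy.optimize import linprog

def predict_active_set(g, A, cval, delta, tol=1e-8):
    """Phase 1: solve  min g'd  s.t.  cval + A d >= 0, |d|_inf <= delta.
    Constraints tight at the LP solution predict the active set."""
    n, m = len(g), len(cval)
    res = linprog(c=g, A_ub=-A, b_ub=cval,
                  bounds=[(-delta, delta)] * n, method="highs")
    d = res.x
    resid = cval + A @ d
    return d, [i for i in range(m) if resid[i] < tol]

def eqp_step(g, H, A_act, c_act):
    """Phase 2: treat the predicted active constraints as equalities and solve
    the KKT system of  min g'd + 0.5 d'Hd  s.t.  c_act + A_act d = 0."""
    n, k = H.shape[0], A_act.shape[0]
    K = np.block([[H, A_act.T], [A_act, np.zeros((k, k))]])
    rhs = np.concatenate([-g, -c_act])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]          # step d and multiplier estimates
```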

Data Assimilation in Weather Forecasting: A Case Study in PDE-Constrained Optimization

Variational data assimilation is used at major weather prediction centers to produce the initial conditions for 7- to 10-day weather forecasts. This technique requires the solution of a very large data-fitting problem in which the major element is a set of partial differential equations that models the evolution of the atmosphere over a time window …
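
For orientation, the data-fitting problem referred to here typically takes the strong-constraint 4D-Var form below; the notation is the standard one and is assumed here rather than taken from the paper:

\[
\min_{x_0}\; J(x_0) \;=\; \tfrac{1}{2}\,(x_0 - x_b)^{\top} B^{-1} (x_0 - x_b)
\;+\; \tfrac{1}{2}\sum_{i=0}^{N} \bigl(H_i(x_i) - y_i\bigr)^{\top} R_i^{-1} \bigl(H_i(x_i) - y_i\bigr),
\qquad x_i = M_{0\to i}(x_0),
\]

where $x_b$ is the background state, $M_{0\to i}$ the (PDE-based) forecast model propagating the initial state to observation time $i$, $H_i$ the observation operators, $y_i$ the observations, and $B$, $R_i$ the background and observation error covariance matrices.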

Global convergence of slanting filter methods for nonlinear programming

In this paper, we present a general algorithm for nonlinear programming that uses a slanting filter criterion for accepting the new iterates. Independently of how these iterates are computed, we prove that all accumulation points of the sequence generated by the algorithm are feasible. Computing the new iterates by the inexact restoration method, we prove …
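
One common form of the slanting (slanted) filter acceptance test is sketched below in Python; the constants and the exact envelope used in the paper may differ, so treat this as an illustration of the mechanism rather than the authors' criterion.

```python
# Hedged sketch of a slanting-filter acceptance test.  Here f is the objective
# value and h the constraint-violation measure of a trial point; the filter is
# a list of (f_j, h_j) pairs from earlier iterations.
def acceptable(f_new, h_new, filter_pairs, beta=0.99, gamma=1e-4):
    """Slanting criterion: instead of comparing f alone, compare f + gamma*h,
    which 'slants' the envelope of the filter toward feasibility."""
    return all(h_new <= beta * h_j or f_new + gamma * h_new <= f_j
               for (f_j, h_j) in filter_pairs)

def add_to_filter(f_new, h_new, filter_pairs):
    """Add the new pair and drop any pair it dominates."""
    kept = [(f_j, h_j) for (f_j, h_j) in filter_pairs
            if f_j < f_new or h_j < h_new]
    return kept + [(f_new, h_new)]
```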

Global minimization using an Augmented Lagrangian method with variable lower-level constraints

A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration, the method requires the $\varepsilon$-global minimization of the Augmented Lagrangian with simple constraints. Global convergence to an $\varepsilon$-global minimizer of the original problem is proved. The subproblems are solved using the $\alpha$BB …
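
Concretely, an Augmented Lagrangian of the standard PHR type reads, in notation assumed here rather than taken from the paper (equality constraints $h_i$, inequality constraints $g_j$, simple constraints $x \in \Omega$),

\[
L_{\rho}(x,\lambda,\mu) \;=\; f(x) \;+\; \frac{\rho}{2}\sum_{i}\Bigl(h_i(x)+\frac{\lambda_i}{\rho}\Bigr)^{2}
\;+\; \frac{\rho}{2}\sum_{j}\max\Bigl(0,\; g_j(x)+\frac{\mu_j}{\rho}\Bigr)^{2},
\]

and each outer iteration asks for an $\varepsilon$-global minimizer of $L_{\rho}$ subject only to the simple lower-level constraints $x \in \Omega$, after which the multipliers $\lambda$, $\mu$ and the penalty parameter $\rho$ are updated.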

Solving systems of nonlinear equations with continuous GRASP

A method for finding all roots of a system of nonlinear equations is described. Our method makes use of C-GRASP, a recently proposed continuous global optimization heuristic. Given a nonlinear system, we solve a corresponding adaptively modified global optimization problem multiple times, each time using C-GRASP, with areas of repulsion around roots that have already …
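
One plausible way to realize the "areas of repulsion" idea is sketched below: the system $F(x)=0$ is recast as global minimization of a merit function whose value is inflated near roots that have already been found. The radius and penalty constant are illustrative placeholders; the paper's exact modification of the objective may differ.

```python
# Hedged sketch of root finding as repeated global optimization with
# repulsion regions around already-found roots.
import numpy as np

def merit(x, F, found_roots, radius=0.1, big=1e6):
    """F maps R^n -> R^n; a global minimizer with value 0 is a root.
    Points inside a ball of the given radius around a known root are
    penalized, repelling the search toward new roots."""
    value = np.sum(np.asarray(F(x)) ** 2)
    for r in found_roots:
        if np.linalg.norm(x - r) < radius:
            value += big
    return value
```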

A continuous GRASP to determine the relationship between drugs and adverse reactions

Adverse drug reactions (ADRs) are estimated to be one of the leading causes of death. Many national and international agencies have set up databases of ADR reports for the express purpose of determining the relationship between drugs and adverse reactions that they cause. We formulate the drug-reaction relationship problem as a continuous optimization problem and …

Speeding up continuous GRASP

Continuous GRASP (C-GRASP) is a stochastic local search metaheuristic for finding cost-efficient solutions to continuous global optimization problems subject to box constraints (Hirsch et al., 2006). Like a greedy randomized adaptive search procedure (GRASP), a C-GRASP is a multi-start procedure where a starting solution for local improvement is constructed in a greedy randomized fashion. In …
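
A hedged skeleton of that multi-start structure is given below in Python: each start performs a greedy randomized construction over a grid of spacing h, then a simple local improvement, with the grid refined as the search proceeds. The construction and local-search rules here are simplified stand-ins, not a faithful reimplementation of Hirsch et al. (2006).

```python
# Hedged sketch of the C-GRASP multi-start skeleton over a box [lo, hi].
import numpy as np

def construct(f, x, lo, hi, h, alpha, rng):
    """Greedy randomized construction: coordinate-wise line search over the
    h-grid, a restricted candidate list (RCL), and a random choice from it."""
    x = x.copy()
    unfixed = list(range(len(x)))
    while unfixed:
        cand = {}
        for i in unfixed:
            grid = np.arange(lo[i], hi[i] + 1e-9, h)
            vals = [f(np.r_[x[:i], t, x[i + 1:]]) for t in grid]
            k = int(np.argmin(vals))
            cand[i] = (vals[k], grid[k])
        vmin = min(v for v, _ in cand.values())
        vmax = max(v for v, _ in cand.values())
        rcl = [i for i, (v, _) in cand.items() if v <= vmin + alpha * (vmax - vmin)]
        j = rng.choice(rcl)
        x[j] = cand[j][1]
        unfixed.remove(j)
    return x

def improve(f, x, lo, hi, h, rng, trials=200):
    """Move to random improving neighbours on the h-grid until none is found."""
    while True:
        for _ in range(trials):
            y = np.clip(x + h * rng.integers(-1, 2, size=x.size), lo, hi)
            if f(y) < f(x):
                x = y
                break
        else:
            return x

def c_grasp(f, lo, hi, starts=20, h0=0.5, h_min=0.01, alpha=0.4, seed=0):
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    rng = np.random.default_rng(seed)
    best_x, best_f = None, np.inf
    for _ in range(starts):
        x, h = rng.uniform(lo, hi), h0
        while h >= h_min:
            x = construct(f, x, lo, hi, h, alpha, rng)
            x = improve(f, x, lo, hi, h, rng)
            h /= 2.0                      # densify the grid
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
    return best_x, best_f
```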

Asynchronous parallel generating set search for linearly-constrained optimization

Generating set search (GSS) is a family of direct search methods that encompasses generalized pattern search and related methods. We describe an algorithm for asynchronous linearly-constrained GSS, which has some complexities that make it different from both the asynchronous bound-constrained case and the synchronous linearly-constrained case. The algorithm has been implemented in the …
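
For context, the sketch below shows the much simpler synchronous, bound-constrained GSS skeleton (coordinate poll directions, sufficient-decrease acceptance, step contraction); the asynchronous, linearly constrained algorithm described in the abstract adds machinery, such as direction sets conforming to nearby linear constraints and asynchronous evaluation bookkeeping, that is not represented here.

```python
# Hedged sketch of a basic synchronous, bound-constrained GSS iteration.
import numpy as np

def gss(f, x0, lo, hi, step=1.0, tol=1e-6, c=1e-4, max_iter=1000):
    x = np.clip(np.asarray(x0, float), lo, hi)
    n = x.size
    dirs = np.vstack([np.eye(n), -np.eye(n)])    # generating set: +/- e_i
    for _ in range(max_iter):
        if step < tol:
            break
        for d in dirs:
            y = np.clip(x + step * d, lo, hi)
            if f(y) < f(x) - c * step**2:        # sufficient decrease
                x = y
                break                            # successful poll: keep step
        else:
            step *= 0.5                          # unsuccessful poll: contract
    return x
```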

Global optimization by continuous GRASP

We introduce a novel global optimization method called Continuous GRASP (C-GRASP), which extends Feo and Resende’s greedy randomized adaptive search procedure (GRASP) from the domain of discrete optimization to that of continuous global optimization. This stochastic local search method is simple to implement, is widely applicable, and does not make use of derivative information, thus …

A local convergence property of primal-dual methods for nonlinear programming

We prove a new local convergence property of a primal-dual method for solving nonlinear optimization problems. Following a standard interior-point approach, the complementarity conditions of the original primal-dual system are perturbed by a parameter that is driven to zero during the iterations. The sequence of iterates is generated by a linearization of the perturbed …
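
In standard notation (assumed here, not necessarily the paper's), the perturbed primal-dual system for $\min f(x)$ subject to $c(x) = 0$, $x \ge 0$ reads

\[
\begin{aligned}
\nabla f(x) \;-\; \nabla c(x)^{\top} y \;-\; z &= 0,\\
c(x) &= 0,\\
X Z e &= \mu\, e, \qquad x > 0,\; z > 0,
\end{aligned}
\]

where $X = \mathrm{diag}(x)$, $Z = \mathrm{diag}(z)$, $e = (1,\dots,1)^{\top}$, and $\mu > 0$ is the perturbation parameter driven to zero; each iteration applies a Newton-type linearization of this system for the current value of $\mu$.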