Solving systems of nonlinear equations with continuous GRASP

A method for finding all roots of a system of nonlinear equations is described. Our method makes use of C-GRASP, a recently proposed continuous global optimization heuristic. Given a nonlinear system, we solve a corresponding adaptively modified global optimization problem multiple times, each time using C-GRASP, with areas of repulsion around roots that have already … Read more
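
The repulsion idea can be sketched in a few lines. The fragment below is an illustrative simplification, not the paper's algorithm: it minimizes the merit function ||F(x)||^2 with a scipy local minimizer standing in for C-GRASP, and the bump added around previously found roots is only a placeholder for the paper's penalty; the function and parameter names are invented for this sketch.

```python
import numpy as np
from scipy.optimize import minimize

def find_all_roots(F, lower, upper, n_roots=5, radius=0.2,
                   restarts=200, tol=1e-8, rng=None):
    """Illustrative root finder: repeatedly minimize a merit function
    ||F(x)||^2, adding a repulsion term around each root already found so
    later searches are pushed toward different roots.  A scipy local
    minimizer stands in for C-GRASP here."""
    rng = np.random.default_rng(rng)
    bounds = list(zip(lower, upper))
    roots = []

    def merit(x):
        val = float(np.dot(F(x), F(x)))
        for r in roots:                       # area of repulsion around known roots
            d = np.linalg.norm(x - r)
            if d < radius:
                val += (radius - d) / radius  # simple bump; the paper's form differs
        return val

    for _ in range(restarts):
        x0 = rng.uniform(lower, upper)
        x = minimize(merit, x0, bounds=bounds).x
        if np.dot(F(x), F(x)) < tol and all(np.linalg.norm(x - r) > radius for r in roots):
            roots.append(x)
            if len(roots) == n_roots:
                break
    return roots

# Example: the circle x^2 + y^2 = 1 intersected with the line x = y has two roots.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
print(find_all_roots(F, lower=[-2.0, -2.0], upper=[2.0, 2.0], n_roots=2))
```

Any derivative-free global minimizer could be substituted for the scipy call; the essential point is that the objective is adaptively modified after each root is located.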

A continuous GRASP to determine the relationship between drugs and adverse reactions

Adverse drug reactions (ADRs) are estimated to be one of the leading causes of death. Many national and international agencies have set up databases of ADR reports for the express purpose of determining the relationship between drugs and adverse reactions that they cause. We formulate the drug-reaction relationship problem as a continuous optimization problem and … Read more

Speeding up continuous GRASP

Continuous GRASP (C-GRASP) is a stochastic local search metaheuristic for finding cost-efficient solutions to continuous global optimization problems subject to box constraints (Hirsch et al., 2006). Like a greedy randomized adaptive search procedure (GRASP), a C-GRASP is a multi-start procedure where a starting solution for local improvement is constructed in a greedy randomized fashion. In … Read more

Asynchronous parallel generating set search for linearly-constrained optimization

Generating set search (GSS) is a family of direct search methods that encompasses generalized pattern search and related methods. We describe an algorithm for asynchronous linearly-constrained GSS, which has some complexities that make it different from both the asynchronous bound-constrained case and the synchronous linearly-constrained case. The algorithm has been implemented in the … Read more
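
To fix ideas, here is a minimal synchronous, bound-constrained pattern-search sketch with the coordinate directions as the generating set; the asynchronous, linearly-constrained algorithm described in the paper is substantially more involved, and all names and parameters below are illustrative.

```python
import numpy as np

def generating_set_search(f, x0, lower, upper, step=1.0, tol=1e-6, max_iter=1000):
    """Minimal synchronous pattern search with the coordinate directions
    +/- e_i as the generating set.  Each iteration polls the directions;
    a success moves the iterate, otherwise the step length is halved."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    directions = np.vstack([np.eye(n), -np.eye(n)])   # generating set for R^n
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for d in directions:                          # synchronous poll
            y = np.clip(x + step * d, lower, upper)
            fy = f(y)
            if fy < fx:                               # simple decrease accepted
                x, fx, improved = y, fy, True
                break
        if not improved:
            step *= 0.5                               # contract after a failed poll
    return x, fx

x_star, f_star = generating_set_search(lambda x: (x[0] - 1)**2 + (x[1] + 2)**2,
                                       x0=[0.0, 0.0], lower=[-5, -5], upper=[5, 5])
```

An asynchronous variant would dispatch the trial evaluations to workers and act on results as they arrive, which, together with general linear constraints, is where the complications mentioned above arise.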

Global optimization by continuous GRASP

We introduce a novel global optimization method called Continuous GRASP (C-GRASP) which extends Feo and Resende’s greedy randomized adaptive search procedure (GRASP) from the domain of discrete optimization to that of continuous global optimization. This stochastic local search method is simple to implement, is widely applicable, and does not make use of derivative information, thus … Read more
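
The overall structure can be sketched as a multi-start loop that alternates a greedy randomized construction with a derivative-free local improvement. The code below is a rough illustration in that spirit, not the published C-GRASP: the grid spacing, the restricted-candidate-list rule, and the helper names are all simplifications introduced here.

```python
import numpy as np

def c_grasp_like(f, lower, upper, starts=20, h=0.1, alpha=0.3, rng=None):
    """Multi-start sketch in the spirit of C-GRASP (Hirsch et al., 2006):
    greedy randomized construction over a grid followed by derivative-free
    local improvement.  Illustrative simplification only."""
    rng = np.random.default_rng(rng)
    n = len(lower)
    best_x, best_f = None, np.inf

    def construct(x):
        # For each coordinate, line-search over grid points; then pick at
        # random a coordinate whose best value is within alpha of the
        # overall best (the restricted candidate list, RCL) and fix it.
        line_best = []
        for i in range(n):
            grid = np.arange(lower[i], upper[i] + h, h)
            vals = [f(np.concatenate([x[:i], [g], x[i+1:]])) for g in grid]
            j = int(np.argmin(vals))
            line_best.append((vals[j], i, grid[j]))
        vmin = min(v for v, _, _ in line_best)
        vmax = max(v for v, _, _ in line_best)
        rcl = [(i, g) for v, i, g in line_best if v <= vmin + alpha * (vmax - vmin)]
        i, g = rcl[rng.integers(len(rcl))]
        x = x.copy()
        x[i] = g
        return x

    def improve(x):
        # Move to any improving neighbor on the current grid until none exists.
        fx = f(x)
        improved = True
        while improved:
            improved = False
            for i in range(n):
                for step in (+h, -h):
                    y = x.copy()
                    y[i] = np.clip(y[i] + step, lower[i], upper[i])
                    if f(y) < fx:
                        x, fx, improved = y, f(y), True
        return x, fx

    for _ in range(starts):
        x = construct(rng.uniform(lower, upper))
        x, fx = improve(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

best_x, best_f = c_grasp_like(lambda x: (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2,
                              lower=[-5.0, -5.0], upper=[5.0, 5.0])
```

Note that only function values are used, which reflects the derivative-free character of the method described above.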

A local convergence property of primal-dual methods for nonlinear programming

We prove a new local convergence property of a primal-dual method for solving nonlinear optimization problems. Following a standard interior point approach, the complementarity conditions of the original primal-dual system are perturbed by a parameter which is driven to zero during the iterations. The sequence of iterates is generated by a linearization of the perturbed … Read more
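
For readers unfamiliar with the setting, the perturbed system referred to above takes the following standard form for min f(x) subject to c(x) ≥ 0 with multipliers λ ≥ 0; the notation is generic and may differ from the paper's.

```latex
% Perturbed primal-dual (KKT) system for  min f(x)  s.t.  c(x) >= 0,
% with multipliers \lambda >= 0; \mu > 0 is the perturbation (barrier) parameter.
F_\mu(x,\lambda) =
\begin{pmatrix}
  \nabla f(x) - \nabla c(x)^{\mathsf T} \lambda \\
  \Lambda\, c(x) - \mu e
\end{pmatrix}
= 0,
\qquad \Lambda = \operatorname{diag}(\lambda),\quad e = (1,\dots,1)^{\mathsf T}.

% One iteration linearizes the system at the current iterate w_k = (x_k,\lambda_k)
% (a Newton step) while \mu_k is driven to zero:
F_\mu'(w_k)\,\Delta w_k = -F_\mu(w_k),
\qquad w_{k+1} = w_k + \alpha_k\,\Delta w_k,
\qquad \mu_k \downarrow 0.
```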

Interior-Point Methods for Nonconvex Nonlinear Programming: Regularization and Warmstarts

In this paper, we investigate the use of an exact primal-dual penalty approach within the framework of an interior-point method for nonconvex nonlinear programming. This approach provides regularization and relaxation, which can aid in solving ill-behaved problems and in warmstarting the algorithm. We present details of our implementation within the LOQO algorithm and provide extensive … Read more
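
As background, the relaxation that penalty approaches of this kind provide can be pictured with a generic elastic (ℓ1 exact-penalty) reformulation; the paper's exact primal-dual penalty formulation is more specific than this sketch, and the notation below is illustrative.

```latex
% Generic elastic / l1 exact-penalty relaxation of  min f(x)  s.t.  c(x) >= 0:
% relaxation variables \xi >= 0 absorb infeasibility and are penalized linearly
% with weights d > 0 (the penalty is exact for d sufficiently large).
\min_{x,\;\xi}\; f(x) + d^{\mathsf T}\xi
\quad \text{s.t.} \quad c(x) + \xi \ge 0,
\qquad \xi \ge 0.
```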

On warm starts for interior methods

An appealing feature of interior methods for linear programming is that the number of iterations required to solve a problem tends to be relatively insensitive to the choice of initial point. This feature has the drawback that it is difficult to design interior methods that efficiently utilize information from an optimal solution to a “nearby” … Read more

Dynamic updates of the barrier parameter in primal-dual methods for nonlinear programming

We introduce a framework in which updating rules for the barrier parameter in primal-dual interior-point methods become dynamic. The original primal-dual system is augmented to incorporate explicitly an updating function. A Newton step for the augmented system gives a primal-dual Newton step and also a step in the barrier parameter. Based on local information and … Read more
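
Schematically, writing the perturbed primal-dual system as F_μ(w) = 0, treating the barrier parameter as an additional unknown coupled through an (abstract) updating function θ gives an augmented system whose Newton step moves w and μ together; the notation below is illustrative rather than the paper's.

```latex
% Augmented system: the barrier parameter \mu becomes an unknown, coupled to the
% primal-dual variables w through an updating function \theta (left abstract here).
% A Newton step on G yields a primal-dual step \Delta w and a step \Delta\mu together.
G(w,\mu) =
\begin{pmatrix}
  F_\mu(w) \\
  \mu - \theta(w,\mu)
\end{pmatrix}
= 0,
\qquad
G'(w_k,\mu_k)
\begin{pmatrix} \Delta w_k \\ \Delta\mu_k \end{pmatrix}
= -\,G(w_k,\mu_k).
```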

A Note on Multiobjective Optimization and Complementarity Constraints

We propose a new approach to convex nonlinear multiobjective optimization that captures the geometry of the Pareto set by generating a discrete set of Pareto points optimally. We show that the problem of finding an optimal representation of the Pareto surface can be formulated as a mathematical program with complementarity constraints. The complementarity constraints arise … Read more
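
Schematically, the complementarity constraints can be pictured as follows: each representative point is required to satisfy the KKT conditions of a weighted-sum scalarization of the convex multiobjective problem, and embedding those conditions as constraints of the outer representation problem produces constraints of the form below (notation illustrative, not necessarily the paper's).

```latex
% Each representative point x^k must be Pareto optimal; for a convex problem
%   min (f_1(x),...,f_m(x))  s.t.  g(x) <= 0,
% this is enforced through the KKT conditions of a weighted-sum scalarization
% with weights w^k, which enter the outer problem as complementarity constraints.
\sum_{i=1}^{m} w_i^k\,\nabla f_i(x^k) + \nabla g(x^k)^{\mathsf T}\lambda^k = 0,
\qquad
0 \le \lambda^k \;\perp\; -g(x^k) \ge 0,
\qquad
w^k \ge 0,\quad e^{\mathsf T} w^k = 1,
```

where ⊥ denotes componentwise complementarity, i.e. λ^k_j g_j(x^k) = 0 for every constraint j.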