A Python/C library for bound-constrained global optimization with continuous GRASP

This paper describes libcgrpp, a GNU-style dynamic shared Python/C library of the continuous greedy randomized adaptive search procedure (C-GRASP) for bound-constrained global optimization. C-GRASP is an extension of the GRASP metaheuristic (Feo and Resende, 1989). After a brief introduction to C-GRASP, we show how to download, install, configure, and use the library through an …
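For a rough sense of the algorithm, here is a minimal Python sketch of the C-GRASP idea: a greedy randomized construction over an h-grid of the box, alternated with a local improvement phase, densifying the grid when the search stagnates. All names, parameters, and the candidate-list rule below are illustrative assumptions of ours; this is not the libcgrpp API.

import random

def cgrasp_sketch(f, lo, hi, h=0.5, h_min=1e-2, trials=50, alpha=0.3, seed=0):
    """Illustrative C-GRASP-style search over the box [lo, hi] (not libcgrpp)."""
    rng = random.Random(seed)
    n = len(lo)
    best = [rng.uniform(lo[i], hi[i]) for i in range(n)]
    best_val = f(best)
    while h >= h_min:
        # Construction phase: coordinate line searches on the h-grid,
        # choosing each coordinate value from a restricted candidate list.
        x = [rng.uniform(lo[i], hi[i]) for i in range(n)]
        for i in rng.sample(range(n), n):
            candidates, t = [], lo[i]
            while t <= hi[i]:
                y = x[:]
                y[i] = t
                candidates.append((f(y), t))
                t += h
            candidates.sort()
            rcl = candidates[:max(1, int(alpha * len(candidates)))]
            x[i] = rng.choice(rcl)[1]
        # Local phase: random +/- h neighbors, clamped to the box.
        val = f(x)
        for _ in range(trials):
            y = [min(hi[i], max(lo[i], x[i] + rng.choice((-h, h)))) for i in range(n)]
            fy = f(y)
            if fy < val:
                x, val = y, fy
        if val < best_val:
            best, best_val = x, val
        else:
            h /= 2.0  # stagnation: densify the grid
    return best, best_val

# Example: minimize a shifted quadratic over [-5, 5]^2.
print(cgrasp_sketch(lambda z: (z[0] - 1) ** 2 + (z[1] + 2) ** 2, [-5, -5], [5, 5]))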

An Implementation of an Algorithm for Nonlinear Programming Based on Piecewise Linear Models

This is a progress report on an implementation of the active-set method for nonlinear programming proposed in [6] that employs piecewise linear models in the active-set prediction phase. The motivation for this work is to develop an algorithm that is capable of solving large-scale problems, including those with a large reduced space. Unlike SQP methods, …

On the evaluation complexity of composite function minimization with applications to nonconvex nonlinear programming

We estimate the worst-case complexity of minimizing an unconstrained, nonconvex composite objective with a structured nonsmooth term by means of some first-order methods. We find that it is unaffected by the nonsmoothness of the objective, in that a first-order trust-region or quadratic regularization method applied to it takes at most $O(\epsilon^{-2})$ function evaluations to reduce the …
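For orientation, the composite setting alluded to above is commonly written as (the notation is an assumption on our part, not taken from the paper)

$\min_{x \in \mathbb{R}^n} \; f(x) = g(x) + h\big(c(x)\big)$,

with $g$ and $c$ smooth and $h$ convex but possibly nonsmooth, e.g. $h = \|\cdot\|_1$. The quoted bound then says that a first-order trust-region or quadratic regularization method needs at most $O(\epsilon^{-2})$ evaluations of $f$ to drive a suitable first-order criticality measure below $\epsilon$, the same order as in the smooth case.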

SpeeDP: A new algorithm to compute the SDP relaxations of Max-Cut for very large graphs

We consider low-rank semidefinite programming (LRSDP) relaxations of unconstrained $\{-1,1\}$ quadratic problems (or, equivalently, of Max-Cut problems) that can be formulated as the nonconvex nonlinear programming problem of minimizing a quadratic function subject to separable quadratic equality constraints. We prove the equivalence of the LRSDP problem with the unconstrained minimization of a new merit function …
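The low-rank reformulation mentioned here has the standard Burer-Monteiro shape; the symbols below are our assumptions rather than the paper's notation. Starting from the Max-Cut SDP relaxation

$\min_{X \succeq 0} \; \langle Q, X \rangle \quad \text{s.t.} \quad X_{ii} = 1, \; i = 1, \dots, n$,

the substitution $X = V^\top V$ with $V = [v_1, \dots, v_n]$, $v_i \in \mathbb{R}^r$, $r \ll n$, yields the nonconvex NLP with separable quadratic equality constraints referred to in the abstract:

$\min_{V} \; \sum_{i,j} Q_{ij} \, v_i^\top v_j \quad \text{s.t.} \quad \|v_i\|^2 = 1, \; i = 1, \dots, n$.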

A Practical Relative Error Criterion for Augmented Lagrangians

This paper develops a new error criterion for the approximate minimization of augmented Lagrangian subproblems. This criterion is practical in the sense that it requires only information that is ordinarily readily available, such as the gradient (or a subgradient) of the augmented Lagrangian. It is also “relative” in the sense of relative error criteria for …
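To make the notion concrete (in generic notation; this particular test is illustrative, not the criterion developed in the paper): for equality constraints $c(x) = 0$ the augmented Lagrangian is

$L_\rho(x, \lambda) = f(x) + \lambda^\top c(x) + \tfrac{\rho}{2} \|c(x)\|^2$,

and a relative criterion accepts an approximate subproblem minimizer $x_k$ once a readily available residual is small relative to another computed quantity, e.g.

$\|\nabla_x L_\rho(x_k, \lambda_k)\| \le \sigma \, \|c(x_k)\|, \qquad \sigma \in (0,1)$,

rather than below a preset absolute tolerance.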

Derivative-free methods for nonlinear programming with general lower-level constraints

Augmented Lagrangian methods for derivative-free continuous optimization with constraints are introduced in this paper. The algorithms inherit the convergence results obtained by Andreani, Birgin, Martínez and Schuverdt for the case in which analytic derivatives exist and are available. In particular, feasible limit points satisfy KKT conditions under the Constant Positive Linear Dependence (CPLD) constraint qualification. …
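As a structural illustration of such methods (not the authors' algorithm or code; every name and update rule below is a generic assumption), the outer loop pairs a derivative-free subproblem solver with first-order multiplier updates, increasing the penalty when infeasibility stalls:

import math
import random

def al_outer_loop(f, c, x0, inner_solve, rho=10.0, tau=0.5, gamma=10.0,
                  tol=1e-6, max_outer=50):
    """Generic augmented-Lagrangian outer loop around a derivative-free
    inner solver (structural sketch only, not the authors' algorithm)."""
    x = list(x0)
    lam = [0.0] * len(c(x))
    prev_infeas = float("inf")
    for _ in range(max_outer):
        def phi(y, lam=lam, rho=rho):  # augmented Lagrangian for c(y) = 0
            cy = c(y)
            return (f(y) + sum(l * v for l, v in zip(lam, cy))
                    + 0.5 * rho * sum(v * v for v in cy))
        x = inner_solve(phi, x)                       # derivative-free subproblem
        cx = c(x)
        infeas = math.sqrt(sum(v * v for v in cx))
        if infeas <= tol:
            break
        lam = [l + rho * v for l, v in zip(lam, cx)]  # first-order update
        if infeas > tau * prev_infeas:                # too little progress:
            rho *= gamma                              # increase the penalty
        prev_infeas = infeas
    return x, lam, rho

# Crude random-search inner solver, for illustration only.
def random_search(phi, x, steps=2000, radius=0.5):
    rng = random.Random(0)
    best, val = list(x), phi(x)
    for _ in range(steps):
        y = [b + rng.uniform(-radius, radius) for b in best]
        fy = phi(y)
        if fy < val:
            best, val = y, fy
    return best

# Minimize x0 + x1 subject to x0^2 + x1^2 = 2 (solution near (-1, -1)).
print(al_outer_loop(lambda z: z[0] + z[1],
                    lambda z: [z[0] ** 2 + z[1] ** 2 - 2.0],
                    [2.0, 0.0], random_search))

Any box-constrained derivative-free solver can play the role of inner_solve here; the convergence theory cited above is what justifies pairing such a solver with these classical multiplier and penalty updates.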

Augmented Lagrangian method with nonmonotone penalty parameters for constrained optimization

At each outer iteration of standard Augmented Lagrangian methods one tries to solve a box-constrained optimization problem to some prescribed tolerance. In the continuous world, using exact arithmetic, this subproblem is always solvable. Therefore, the possibility that the subproblem solver terminates without satisfying the theoretical stopping conditions is not contemplated in the usual convergence theories. However, …

A relaxed constant positive linear dependence constraint qualification and applications

In this work we introduce a relaxed version of the constant positive linear dependence constraint qualification (CPLD), which we call RCPLD. This development is inspired by a recent generalization of the constant rank constraint qualification, due to Minchenko and Stakhovski, called RCR. We show that RCPLD is enough to ensure the convergence of an …
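For reference, CPLD (the condition RCPLD relaxes) is usually stated as follows; the notation here is assumed. At a feasible point $x^*$ of $h(x) = 0$, $g(x) \le 0$, CPLD holds if, whenever the gradients

$\{\nabla h_i(x^*)\}_{i \in I} \cup \{\nabla g_j(x^*)\}_{j \in J}$, with $J$ a subset of the active inequality indices,

are positively linearly dependent, the same gradients remain linearly dependent for every $x$ in a neighborhood of $x^*$. RCPLD, roughly speaking, weakens this by testing only index sets that contain a fixed basis of the equality-constraint gradients.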

Local path-following property of inexact interior methods in nonlinear programming

We study the local behavior of primal-dual inexact interior point methods for solving the nonlinear systems arising from nonlinear optimization problems or, more generally, from nonlinear complementarity problems. The algorithm is based on Newton's method applied to a sequence of perturbed systems obtained by perturbing the complementarity equations of …
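A generic instance of the perturbed systems in question (primal-dual notation assumed, not taken from the paper): for $\min f(x)$ s.t. $c(x) = 0$, $x \ge 0$, the interior method follows solutions $w = (x, \lambda, z)$ of

$F_\mu(x, \lambda, z) = \begin{pmatrix} \nabla f(x) - \nabla c(x)\lambda - z \\ c(x) \\ XZe - \mu e \end{pmatrix} = 0, \qquad x, z > 0$,

where $X = \mathrm{diag}(x)$ and $Z = \mathrm{diag}(z)$, as $\mu \downarrow 0$. An inexact variant accepts Newton steps $d_k$ that satisfy a residual test of the usual inexact-Newton form, such as $\|F_{\mu_k}(w_k) + F'_{\mu_k}(w_k)\, d_k\| \le \eta_k \, \|F_{\mu_k}(w_k)\|$.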

A Gauss-Newton approach for solving constrained optimization problems using differentiable exact penalties

We propose a Gauss-Newton-type method for nonlinear constrained optimization using the exact penalty introduced recently by Andre and Silva for variational inequalities. We extend their penalty function to both equality and inequality constraints using a weak regularity assumption, and as a result, we obtain a continuously differentiable exact penalty function and a new reformulation of …
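For intuition, a classical differentiable exact penalty of this general flavor (Fletcher-style, written here for equality constraints only; an illustration, not the penalty of Andre and Silva) replaces the multipliers by a smooth least-squares estimate:

$\phi_\rho(x) = f(x) + \lambda(x)^\top h(x) + \tfrac{\rho}{2} \|h(x)\|^2, \qquad \lambda(x) = -\big(\nabla h(x)^\top \nabla h(x)\big)^{-1} \nabla h(x)^\top \nabla f(x)$,

where $\nabla h(x)$ collects the constraint gradients as columns. Under standard regularity this makes $\phi_\rho$ continuously differentiable, and for $\rho$ large enough its unconstrained minimizers solve the constrained problem, which is what makes a Gauss-Newton-type treatment of the reformulation natural.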