Adaptive cubic overestimation methods for unconstrained optimization

An Adaptive Cubic Overestimation (ACO) algorithm for unconstrained optimization is proposed, generalizing at the same time an unpublished method due to Griewank (Technical Report NA/12, 1981, DAMTP, Univ. of Cambridge), an algorithm by Nesterov & Polyak (Math. Programming 108(1), 2006, pp 177-205) and a proposal by Weiser, Deuflhard & Erdmann (Optim. Methods Softw. 22(3), 2007, … Read more
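As an illustration (notation mine, not necessarily the paper's), methods of this kind compute a trial step s_k by approximately minimizing a cubic model of the form

  m_k(s) = f(x_k) + \nabla f(x_k)^\top s + \tfrac{1}{2} s^\top B_k s + \tfrac{\sigma_k}{3} \|s\|^3,

where B_k is a (possibly approximate) Hessian and the regularization weight \sigma_k is adapted from one iteration to the next, playing a role similar to that of a trust-region radius.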

A recursive trust-region method in infinity norm for bound-constrained nonlinear optimization

A recursive trust-region method is introduced for the solution of bound-constrained nonlinear nonconvex optimization problems for which a hierarchy of descriptions exists. Typical cases are infinite-dimensional problems for which the levels of the hierarchy correspond to discretization levels, from coarse to fine. The new method uses the infinity norm to define the shape of the … Read more
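A sketch of why the infinity norm is convenient here (notation mine): the trust-region subproblem at x_k with bounds l \le x \le u,

  \min_s \; m_k(s) \quad \text{s.t.} \quad \|s\|_\infty \le \Delta_k, \;\; l \le x_k + s \le u,

has a feasible set that is the intersection of two boxes, hence again a box with componentwise bounds \max(l_i - [x_k]_i, -\Delta_k) \le s_i \le \min(u_i - [x_k]_i, \Delta_k).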

Nonlinear programming without a penalty function or a filter

A new method is introduced for solving equality constrained nonlinear optimization problems. This method does not use a penalty function, nor a barrier or a filter, and yet can be proved to be globally convergent to first-order stationary points. It uses different trust-regions to cope with the nonlinearities of the objective function and the constraints, … Read more

A Brief History of Filter Methods

We consider the question of global convergence of iterative methods for nonlinear programming problems. Traditionally, penalty functions have been used to enforce global convergence. In this paper we review a recent alternative, so-called filter methods. Instead of combining the objective and constraint violation into a single function, filter methods view nonlinear optimization as a biobjective … Read more
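A minimal sketch of the dominance test at the heart of a filter (illustrative Python; the envelope constant and the exact acceptance rule vary across the literature and are not taken from this paper):

  def is_acceptable(f_new, h_new, filter_pairs, gamma=1e-5):
      # A trial point with objective value f_new and constraint violation h_new
      # is acceptable if, against every stored pair (f_j, h_j), it sufficiently
      # improves at least one of the two measures.
      for f_j, h_j in filter_pairs:
          if not (f_new <= f_j - gamma * h_j or h_new <= (1.0 - gamma) * h_j):
              return False  # dominated by (f_j, h_j)
      return True

Accepted iterates that reduce the constraint violation are then typically appended to filter_pairs, which is what allows global convergence without any penalty parameter.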

Numerical Experience with a Recursive Trust-Region Method for Multilevel Nonlinear Optimization

We consider an implementation of the recursive multilevel trust-region algorithm proposed by Gratton, Sartenaer, Toint (2004), and provide significant numerical experience on multilevel test problems. A suitable choice of the algorithm’s parameters is identified on these problems, yielding a very satisfactory compromise between reliability and efficiency. The resulting default algorithm is then compared to alternative … Read more

Recognizing Underlying Sparsity in Optimization

Exploiting sparsity is essential to improve the efficiency of solving large optimization problems. We present a method for recognizing the underlying sparsity structure of a nonlinear partially separable problem, and show how the sparsity of the Hessian matrices of the problem’s functions can be improved by performing a nonsingular linear transformation in the space corresponding … Read more
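For context (my notation, not the paper's), a partially separable function has the form

  f(x) = \sum_{i=1}^{m} f_i(U_i x), \qquad \nabla^2 f(x) = \sum_{i=1}^{m} U_i^\top \nabla^2 f_i(U_i x)\, U_i,

where each U_i selects the few variables on which element i depends. For instance, an element of the form f_i(x) = \phi(x_1 - x_2) becomes a function of a single variable after the linear change of variables y_1 = x_1 - x_2, so its internal Hessian shrinks and the assembled Hessian gains sparsity.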

Second-order convergence properties of trust-region methods using incomplete curvature information, with an application to multigrid optimization

Convergence properties of trust-region methods for unconstrained nonconvex optimization are considered in the case where information on the objective function’s local curvature is incomplete, in the sense that it may be restricted to a fixed set of “test directions” and may not be available at every iteration. It is shown that convergence to local “weak” … Read more
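Loosely speaking (the abstract above is truncated, so this is only indicative), a “weak” second-order point in this setting is one at which, in addition to \nabla f(x_*) = 0, the curvature is required to be nonnegative along the available test directions only, i.e. d^\top \nabla^2 f(x_*)\, d \ge 0 for all d in the fixed test set \mathcal{D}, rather than for every direction in \mathbb{R}^n.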

Sensitivity of trust-region algorithms to their parameters

In this paper, we examine the sensitivity of trust-region algorithms to the parameters related to the step acceptance and update of the trust region. We show, in the context of unconstrained programming, that the numerical efficiency of these algorithms can easily be improved by choosing appropriate parameters. Recommended ranges of values for these parameters are … Read more
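For reference, the rules whose parameters are being tuned are the usual acceptance ratio test and radius update (Python sketch; the values shown are common textbook defaults, not the recommendations of the paper):

  def trust_region_update(rho, delta, step_norm,
                          eta1=0.01, eta2=0.9, gamma1=0.5, gamma2=2.0):
      # rho = (actual reduction in f) / (reduction predicted by the model)
      accept = rho >= eta1
      if rho >= eta2:        # very successful step: allow the radius to grow
          delta = max(delta, gamma2 * step_norm)
      elif rho < eta1:       # unsuccessful step: shrink the radius
          delta = gamma1 * delta
      # otherwise (eta1 <= rho < eta2): keep the radius unchanged
      return accept, delta

The paper’s question is how sensitive practical performance is to the choice of eta1, eta2, gamma1 and gamma2.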

Recursive Trust-Region Methods for Multilevel Nonlinear Optimization (Part I): Global Convergence and Complexity

A class of trust-region methods is presented for solving unconstrained nonlinear and possibly nonconvex discretized optimization problems, like those arising in systems governed by partial differential equations. The algorithms in this class make use of the discretization level as a means of speeding up the computation of the step. This use is recursive, leading to … Read more
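In rough terms (my notation, following the usual multilevel setting rather than the paper's exact statement), when a coarser level i-1 with objective f_{i-1} and restriction operator R_i is available, the step at level i can be computed by minimizing the first-order coherent coarse model

  h_{i-1}(x_{i-1,0} + s) = f_{i-1}(x_{i-1,0} + s) + \big(R_i \nabla f_i(x_{i,k}) - \nabla f_{i-1}(x_{i-1,0})\big)^\top s,

whose gradient at s = 0 matches the restricted fine-level gradient; the resulting coarse correction is then prolongated back to level i and assessed with the usual trust-region ratio.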

A filter-trust-region method for unconstrained optimization

A new filter-trust-region algorithm for solving unconstrained nonlinear optimization problems is introduced. Based on the filter technique introduced by Fletcher and Leyffer, it extends an existing technique of Gould, Leyffer and Toint (SIAM J. Optim., to appear 2004) for nonlinear equations and nonlinear least-squares to the fully general unconstrained optimization problem. The new algorithm is … Read more