A Nonlinear Conjugate Gradient Algorithm with An Optimal Property and An Improved Wolfe Line Search

In this paper, we seek the conjugate gradient direction closest to the direction of the scaled memoryless BFGS method and propose a family of conjugate gradient methods for unconstrained optimization. An improved Wolfe line search is also proposed, which can avoid a numerical drawback of the Wolfe line search and guarantee the global convergence of …
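
As a rough illustration of the ingredients involved, the sketch below runs a generic nonlinear conjugate gradient loop under SciPy's Wolfe line search; the HS+ choice of beta is a stand-in, not the scaled-memoryless-BFGS-based family derived in the paper.

```python
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=500):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]  # Wolfe-condition step
        if alpha is None:                 # line search failed: restart along -g
            d = -g
            alpha = line_search(f, grad, x, d, gfk=g)[0]
            if alpha is None:
                break
        x = x + alpha * d
        g_new = grad(x)
        y = g_new - g
        dy = d @ y
        beta = max((g_new @ y) / dy, 0.0) if dy != 0 else 0.0  # HS+ stand-in
        d = -g_new + beta * d
        g = g_new
    return x

print(nonlinear_cg(rosen, rosen_der, np.array([-1.2, 1.0])))  # approaches [1, 1]
```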

A surrogate management framework using rigorous trust-region steps

Surrogate models and heuristics are frequently used in the optimization engineering community as convenient approaches to deal with functions whose evaluations are expensive or noisy, or that lack convexity. These methodologies do not typically guarantee any type of convergence under reasonable assumptions and frequently exhibit slow convergence. In this paper we will show how to …
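
The rigorous steps referred to here are ordinary trust-region steps governed by the actual-versus-predicted reduction ratio. A minimal sketch of that mechanism, using a simple Cauchy-type step rather than the paper's surrogate management, might look as follows.

```python
import numpy as np

def tr_loop(f, grad, x0, delta=1.0, eta=0.1, tol=1e-6, max_iter=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        s = -(delta / gnorm) * g          # Cauchy-type step on the ball boundary
        pred = -(g @ s)                   # decrease predicted by the linear model
        ared = f(x) - f(x + s)            # actual decrease of f
        if ared / pred >= eta:            # model trustworthy: accept and expand
            x = x + s
            delta *= 2.0
        else:                             # model poor: reject and shrink
            delta *= 0.25
    return x
```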

A Dwindling Filter Line Search Method for Unconstrained Optimization

In this paper, we propose a new dwindling multidimensional filter second-order line search method for solving large-scale unconstrained optimization problems. Usually, the multidimensional filter is constructed with a fixed envelope, which imposes a strict condition on the gradient vectors. A dwindling multidimensional filter technique, which is a modification and improvement of the original multidimensional filter, …
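
For intuition, a multidimensional filter stores past gradient vectors and accepts a new point only if it beats every stored entry in at least one component by an envelope margin. The sketch below assumes a fixed scalar envelope parameter; in a dwindling filter that margin would shrink with the step size instead of being fixed.

```python
import numpy as np

def filter_acceptable(g_new, filter_entries, envelope):
    # Accept g_new only if, against every stored gradient, at least one
    # component improves by the envelope margin. Here 'envelope' is a fixed
    # scalar in (0, 1); a dwindling filter lets it shrink with the step size.
    g_new = np.abs(g_new)
    return all(np.any(g_new <= (1.0 - envelope) * np.abs(g_old))
               for g_old in filter_entries)

entries = [np.array([1.0, 0.5]), np.array([0.8, 0.9])]
print(filter_acceptable(np.array([0.9, 0.3]), entries, envelope=0.1))  # True
```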

A Perry Descent Conjugate Gradient Method with Restricted Spectrum

A new nonlinear conjugate gradient method, based on Perry’s idea, is presented. It is shown that its sufficient descent property holds independently of any line search and that the eigenvalues of $P_{k+1}^{\top}P_{k+1}$ are bounded above, where $P_{k+1}$ is the iteration matrix of the new method. Thus, global convergence is proved by the spectral analysis …
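
To make the spectral claim concrete, the sketch below forms the classic Perry iteration matrix (the paper's restricted-spectrum modification is not reproduced) and inspects the eigenvalues of $P^{\top}P$ numerically.

```python
import numpy as np

def perry_matrix(s, y):
    # Classic Perry iteration matrix: d_{k+1} = -P g_{k+1} with
    # P = I - s (y - s)^T / (s^T y). The paper bounds the spectrum of
    # P^T P; this sketch only inspects it numerically.
    return np.eye(s.size) - np.outer(s, y - s) / (s @ y)

s, y = np.array([1.0, 0.2]), np.array([0.8, 0.5])
P = perry_matrix(s, y)
print(np.linalg.eigvalsh(P.T @ P))  # squared singular values of P
```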

Solving structured nonlinear least-squares and nonlinear feasibility problems with expensive functions

We present an algorithm for nonlinear least-squares and nonlinear feasibility problems, i.e., for systems of nonlinear equations and nonlinear inequalities, that depend on the outcome of expensive functions whose derivatives are assumed to be unavailable. Our algorithm combines derivative-free techniques with filter trust-region methods to keep the number of expensive function evaluations low and …
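
A standard building block in such methods is a damped Gauss–Newton step confined to a trust region. The sketch below shows that step in isolation, assuming a residual vector r and model Jacobian J; the paper obtains J from derivative-free models and adds a filter, neither of which is shown here.

```python
import numpy as np

def tr_least_squares_step(r, J, delta, mu=1e-8):
    # Damped Gauss-Newton step for min 0.5*||r + J s||^2 with ||s|| <= delta:
    # grow the damping mu until the step fits inside the trust region.
    n = J.shape[1]
    while True:
        s = np.linalg.solve(J.T @ J + mu * np.eye(n), -J.T @ r)
        if np.linalg.norm(s) <= delta:
            return s
        mu *= 10.0

J = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
r = np.array([1.0, -1.0, 0.5])
print(tr_least_squares_step(r, J, delta=0.3))
```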

Generalizations of the limited-memory BFGS method based on quasi-product form of update

Two families of limited-memory variable metric (quasi-Newton) methods for unconstrained minimization, based on the quasi-product form of update, are derived. For the first family, four variants of utilizing the Strang recurrences for the Broyden class of variable metric updates are investigated; three of them use the same number of stored vectors as the …
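
The Strang recurrences mentioned here are the familiar two-loop recursion for applying the limited-memory BFGS inverse Hessian to the gradient. A standard rendition (not the paper's quasi-product variants) is sketched below.

```python
import numpy as np

def lbfgs_direction(g, S, Y, gamma=1.0):
    # S, Y: lists of the m most recent pairs s_i = x_{i+1} - x_i and
    # y_i = g_{i+1} - g_i, ordered oldest to newest; H0 = gamma * I.
    m = len(S)
    q = np.array(g, dtype=float)
    alpha = np.empty(m)
    rho = np.array([1.0 / (Y[i] @ S[i]) for i in range(m)])
    for i in reversed(range(m)):          # first loop: newest to oldest
        alpha[i] = rho[i] * (S[i] @ q)
        q -= alpha[i] * Y[i]
    r = gamma * q                         # apply the initial inverse Hessian
    for i in range(m):                    # second loop: oldest to newest
        beta = rho[i] * (Y[i] @ r)
        r += (alpha[i] - beta) * S[i]
    return -r                             # quasi-Newton search direction -H g
```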

Derivative-free methods for nonlinear programming with general lower-level constraints

Augmented Lagrangian methods for derivative-free continuous optimization with constraints are introduced in this paper. The algorithms inherit the convergence results obtained by Andreani, Birgin, Martínez and Schuverdt for the case in which analytic derivatives exist and are available. In particular, feasible limit points satisfy KKT conditions under the Constant Positive Linear Dependence (CPLD) constraint qualification. …
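
As a rough sketch of the outer structure, the code below runs a generic augmented Lagrangian loop for equality constraints with a derivative-free inner solve; Nelder–Mead stands in for the subproblem solvers such frameworks use, and the penalty update shown is a simplification.

```python
import numpy as np
from scipy.optimize import minimize

def aug_lagrangian(f, c, x0, lam0, rho=10.0, outer=20, tol=1e-6):
    x, lam = np.asarray(x0, dtype=float), np.asarray(lam0, dtype=float)
    for _ in range(outer):
        # Augmented Lagrangian for the equality constraints c(x) = 0.
        L = lambda z, lam=lam, rho=rho: (f(z) + lam @ c(z)
                                         + 0.5 * rho * np.sum(c(z) ** 2))
        x = minimize(L, x, method="Nelder-Mead").x  # derivative-free inner solve
        cv = c(x)
        if np.linalg.norm(cv) < tol:
            break
        lam = lam + rho * cv              # first-order multiplier update
        rho *= 2.0                        # simplistic penalty increase
    return x, lam
```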

On affine scaling inexact dogleg methods for bound-constrained nonlinear systems

A class of trust-region methods for large-scale bound-constrained systems of nonlinear equations is presented. The methods in this class follow the so-called affine-scaling approach and can efficiently handle large-scale problems. At each iteration, a suitably scaled region around the current approximate solution is defined and, within such a region, the norm of …
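
The dogleg step underlying such methods can be stated compactly. The classic (unscaled) version is sketched below; the affine-scaling variants of the paper additionally rescale the region so that iterates respect the bounds.

```python
import numpy as np

def dogleg(g, B, delta):
    # Classic dogleg step for the model m(s) = g^T s + 0.5 s^T B s,
    # ||s|| <= delta, with B symmetric positive definite.
    pB = np.linalg.solve(B, -g)                   # full quasi-Newton step
    if np.linalg.norm(pB) <= delta:
        return pB
    pU = -(g @ g) / (g @ B @ g) * g               # unconstrained Cauchy point
    if np.linalg.norm(pU) >= delta:
        return (delta / np.linalg.norm(pU)) * pU
    d = pB - pU                                   # follow the dogleg to the boundary
    a, b, c = d @ d, 2.0 * (pU @ d), pU @ pU - delta**2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return pU + tau * d
```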

Nonmonotone Filter Method for Nonlinear Optimization

We propose a new nonmonotone filter method to promote global and fast local convergence for sequential quadratic programming algorithms. Our method uses two filters: a global g-filter for global convergence, and a local nonmonotone l-filter that allows us to establish fast local convergence. We show how to switch between the two filters efficiently, and we …
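
The basic filter test these methods build on accepts a trial point if it sufficiently improves either the constraint violation h or the objective f against every stored pair. A minimal sketch, with gamma a small margin parameter and without the paper's nonmonotone l-filter relaxation:

```python
def acceptable(h_new, f_new, filt, gamma=1e-3):
    # (h, f)-filter test: h is the constraint violation, f the objective.
    # A trial point is acceptable if it improves on every stored pair by a
    # small margin in at least one of the two measures.
    return all(h_new <= (1.0 - gamma) * h or f_new <= f - gamma * h
               for (h, f) in filt)

print(acceptable(0.01, 1.0, [(0.1, 0.9), (0.05, 1.2)]))  # True
```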