A Dwindling Filter Line Search Method for Unconstrained Optimization

In this paper, we propose a new dwindling multidimensional filter second-order line search method for solving large-scale unconstrained optimization problems. Usually, the multidimensional filter is constructed with a fixed envelope, which imposes a strict acceptance condition on the gradient vectors. A dwindling multidimensional filter technique, which is a modification and improvement of the original multidimensional filter, … Read more
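
As a rough illustration of the idea, the sketch below implements a multidimensional filter acceptance test whose envelope shrinks with the line-search step length; the particular dwindling rule, the constant gamma, and the function names are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def acceptable(g_new, filter_entries, alpha, gamma=1e-3):
    """Multidimensional filter test with a dwindling envelope (sketch).

    g_new          : gradient at the trial point
    filter_entries : gradients already stored in the filter
    alpha          : current line-search step length; the envelope
                     gamma * min(1, alpha) shrinks as alpha shrinks
                     (an illustrative dwindling rule, not the paper's).
    """
    margin = gamma * min(1.0, alpha)  # dwindling envelope
    for g_old in filter_entries:
        # the trial point must improve some component of every stored gradient
        if not np.any(np.abs(g_new) < np.abs(g_old) - margin * np.linalg.norm(g_old)):
            return False
    return True
```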

A Perry Descent Conjugate Gradient Method with Restricted Spectrum

A new nonlinear conjugate gradient method, based on Perry’s idea, is presented. It is shown that its sufficient descent property is independent of any line search and that the eigenvalues of $P_{k+1}^{\top}P_{k+1}$ are bounded above, where $P_{k+1}$ is the iteration matrix of the new method. Thus, the global convergence is proven by the spectral analysis … Read more
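
For context, one common statement of Perry’s direction writes $d_{k+1} = -P_{k+1}g_{k+1}$ with $P_{k+1} = I - (s_k y_k^{\top} - s_k s_k^{\top})/(y_k^{\top}s_k)$. The sketch below computes this direction with a simple curvature safeguard; the safeguard is only an illustrative way to keep the spectrum of $P_{k+1}^{\top}P_{k+1}$ under control, not the paper's restriction rule.

```python
import numpy as np

def perry_direction(g_new, s, y, tau=1e-8):
    """Perry-type search direction d = -P g (a sketch).

    P = I - (s y^T - s s^T) / (y^T s) is one common statement of
    Perry's iteration matrix. The fallback below, which avoids tiny
    curvature y^T s, is an illustrative safeguard, not the paper's
    spectrum restriction.
    """
    ys = float(y @ s)
    if ys < tau * np.linalg.norm(y) * np.linalg.norm(s):
        return -g_new  # fall back to steepest descent
    # -P g expands to -g + s * (y^T g - s^T g) / (y^T s)
    return -g_new + s * ((y @ g_new) - (s @ g_new)) / ys
```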

Solving structured nonlinear least-squares and nonlinear feasibility problems with expensive functions

We present an algorithm for nonlinear least-squares and nonlinear feasibility problems, i.e., for systems of nonlinear equations and nonlinear inequalities, which depend on the outcome of expensive functions whose derivatives are assumed to be unavailable. Our algorithm combines derivative-free techniques with filter trust-region methods to keep the number of expensive function evaluations low and … Read more
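
The standard reformulation behind treating both problem classes with one least-squares solver is to fold the inequalities into the residual via $\max(0,\cdot)$; a minimal sketch (not taken from the paper):

```python
import numpy as np

def feasibility_residual(h_vals, g_vals):
    """Recast a system h(x) = 0, g(x) <= 0 as a least-squares residual.

    Minimizing 0.5 * ||r(x)||^2 with the residual below is zero exactly
    at feasible points; this standard reformulation (not specific to the
    paper) is what lets one algorithm cover both problem classes.
    """
    return np.concatenate([h_vals, np.maximum(0.0, g_vals)])

# e.g. h(x) = x0 + x1 - 1 = 0,  g(x) = x0 - 0.8 <= 0  at x = (0.9, 0.2):
r = feasibility_residual(np.array([0.1]), np.array([0.1]))
print(0.5 * r @ r)  # 0.01: positive, so x is infeasible
```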

Generalizations of the limited-memory BFGS method based on quasi-product form of update

Two families of limited-memory variable metric (quasi-Newton) methods for unconstrained minimization, based on a quasi-product form of update, are derived. For the first family, four variants of how to utilize the Strang recurrences for the Broyden class of variable metric updates are investigated; three of them use the same number of stored vectors as the … Read more
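
The Strang recurrences referred to here are the classical two-loop recursion for applying the L-BFGS inverse-Hessian approximation; a standard implementation for reference (the paper's quasi-product-form variants modify how the stored vectors are combined):

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Strang's two-loop recursion: returns -H g for the L-BFGS
    inverse-Hessian approximation H built from the stored pairs
    (s_i, y_i) = (x_{i+1} - x_i, grad_{i+1} - grad_i)."""
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):  # first loop, newest first
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if s_list:                                            # scaled initial H0
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):  # second loop, oldest first
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return -q
```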

Derivative-free methods for nonlinear programming with general lower-level constraints

Augmented Lagrangian methods for derivative-free continuous optimization with constraints are introduced in this paper. The algorithms inherit the convergence results obtained by Andreani, Birgin, Martínez and Schuverdt for the case in which analytic derivatives exist and are available. In particular, feasible limit points satisfy KKT conditions under the Constant Positive Linear Dependence (CPLD) constraint qualification. … Read more
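
A minimal sketch of the overall scheme, assuming a PHR-type augmented Lagrangian for inequality constraints and Nelder-Mead as the derivative-free subproblem solver (both the subsolver and the penalty update below are illustrative choices, not the algorithms of the paper):

```python
import numpy as np
from scipy.optimize import minimize

def auglag_dfo(f, g, x0, rho=10.0, iters=10):
    """Derivative-free augmented Lagrangian sketch for
    min f(x) s.t. g(x) <= 0, using a PHR-type augmented Lagrangian."""
    x = np.asarray(x0, dtype=float)
    mu = np.zeros(len(g(x)))
    for _ in range(iters):
        def L(z):  # PHR augmented Lagrangian at current (mu, rho)
            v = np.maximum(0.0, g(z) + mu / rho)
            return f(z) + 0.5 * rho * np.sum(v ** 2)
        x = minimize(L, x, method="Nelder-Mead").x  # derivative-free subsolver
        mu = np.maximum(0.0, mu + rho * g(x))       # first-order multiplier update
        rho *= 2.0                                  # simple penalty increase
    return x, mu

# e.g. min x0^2 + x1^2 s.t. 1 - x0 - x1 <= 0  ->  x* = (0.5, 0.5)
x, mu = auglag_dfo(lambda x: x @ x, lambda x: np.array([1 - x[0] - x[1]]), [0.0, 0.0])
print(np.round(x, 3))
```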

On affine scaling inexact dogleg methods for bound-constrained nonlinear systems

A class of trust-region methods for large scale bound-constrained systems of nonlinear equations is presented. The methods in this class follow the so-called affine-scaling approach and can efficiently handle large scale problems. At each iteration, a suitably scaled region around the current approximate solution is defined and, within such a region, the norm of … Read more
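
One widely used affine scaling for bound constraints is the Coleman-Li diagonal, sketched below; methods in this class differ in the exact scaling and how it enters the dogleg step, so take this only as a representative choice:

```python
import numpy as np

def coleman_li_scaling(x, grad, lower, upper):
    """Coleman-Li style diagonal scaling for bounds l <= x <= u (sketch).

    d_i is the distance to the bound the gradient pushes toward;
    the trust region is then defined in the scaled variables.
    """
    d = np.ones_like(x)
    for i in range(len(x)):
        if grad[i] < 0 and np.isfinite(upper[i]):
            d[i] = upper[i] - x[i]   # moving up toward u_i
        elif grad[i] >= 0 and np.isfinite(lower[i]):
            d[i] = x[i] - lower[i]   # moving down toward l_i
    return d
```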

Nonmonotone Filter Method for Nonlinear Optimization

We propose a new nonmonotone filter method to promote global and fast local convergence for sequential quadratic programming algorithms. Our method uses two filters: a global g-filter for global convergence, and a local nonmonotone l-filter that allows us to establish fast local convergence. We show how to switch between the two filters efficiently, and we … Read more
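
The nonmonotone ingredient can be illustrated by the classical test that compares a trial value against the worst of the last M accepted values; the paper's actual g-filter/l-filter bookkeeping and switching logic are more involved than this sketch:

```python
def nonmonotone_accept(f_trial, history, pred, M=5, eta=1e-4):
    """Classical nonmonotone acceptance test (sketch): require sufficient
    decrease relative to the worst of the last M accepted values, rather
    than the current one. pred is the model's predicted reduction."""
    f_ref = max(history[-M:])              # nonmonotone reference value
    return f_trial <= f_ref - eta * pred   # sufficient decrease vs. reference
```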

Switching stepsize strategies for PDIP

In this chapter we present a primal-dual interior point algorithm for solving constrained nonlinear programming problems. Switching rules are implemented that aim at exploiting the merits and avoiding the drawbacks of three different merit functions. The penalty parameter is determined using an adaptive penalty strategy that ensures a descent property for the merit function. The … Read more
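
As an illustration of the switching idea only: given gradients of several candidate merit functions, one can prefer a merit function for which the current step is a descent direction. The rule below is a hypothetical skeleton, not the chapter's rules, which are tied to its three specific merit functions:

```python
import numpy as np

def pick_merit(step, merit_grads):
    """Illustrative switching rule among several merit functions:
    prefer one for which the step is a descent direction, else take
    the one with the smallest (most negative) directional derivative."""
    slopes = [g @ step for g in merit_grads]  # directional derivatives
    descending = [i for i, sl in enumerate(slopes) if sl < 0]
    return descending[0] if descending else int(np.argmin(slopes))
```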

A Derivative-Free Algorithm for Least-Squares Minimization

We develop a framework for a class of derivative-free algorithms for the least-squares minimization problem. These algorithms are based on polynomial interpolation models and are designed to take advantage of the problem structure. Under suitable conditions, we establish the global convergence and local quadratic convergence properties of these algorithms. Promising numerical results indicate the algorithm … Read more
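
A minimal sketch of the structure-exploiting idea: interpolate each residual component individually (rather than the scalar sum of squares) to obtain a model Jacobian, then take a Gauss-Newton step for the model. The linear-model construction below is an illustrative simplification of the polynomial interpolation models in the paper:

```python
import numpy as np

def interpolation_gauss_newton_step(points, residuals):
    """Model-based DFO step for least squares (a sketch): fit a linear
    model r(x) ~ r0 + J (x - x0) to the residuals from n+1 sample
    points, then take the Gauss-Newton step for that model."""
    x0, r0 = points[0], residuals[0]
    S = np.array([p - x0 for p in points[1:]])      # (n, n) displacements
    R = np.array([r - r0 for r in residuals[1:]])   # (n, m) residual changes
    J = np.linalg.solve(S, R).T                     # interpolation Jacobian
    return np.linalg.lstsq(J, -r0, rcond=None)[0]   # Gauss-Newton step

# e.g. r(x) = (x0 - 1, x1 + 2): one step recovers the minimizer
pts = [np.zeros(2), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
res = [np.array([p[0] - 1, p[1] + 2]) for p in pts]
print(pts[0] + interpolation_gauss_newton_step(pts, res))  # [ 1. -2.]
```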

Computational experience with modified conjugate gradient methods for unconstrained optimization

In this report, several modifications of the nonlinear conjugate gradient method are described and investigated. Theoretical properties of these modifications are proved and their practical performance is demonstrated using extensive numerical experiments. Citation: Technical Report No. 1038, Institute of Computer Science, Pod Vodarenskou Vezi 2, 182 07 Praha 8, December 2008.
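
For reference, a baseline nonlinear conjugate gradient loop of the kind such modifications start from, here with the Polak-Ribière-plus beta and a Wolfe line search (the specific modifications investigated in the report differ from this):

```python
import numpy as np
from scipy.optimize import line_search

def prplus_cg(f, grad, x, tol=1e-6, max_iter=1000):
    """Baseline nonlinear CG with the Polak-Ribiere-plus beta and a
    Wolfe line search; a reference implementation, not the report's
    modified methods."""
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        alpha = line_search(f, grad, x, d)[0]
        if alpha is None:                # line search failed: restart
            d, alpha = -g, 1e-4
        x = x + alpha * d
        g_new = grad(x)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PR+ formula
        d = -g_new + beta * d
        g = g_new
    return x

# e.g. minimize ||x - 3||^2 from the origin in R^5
print(prplus_cg(lambda x: (x - 3) @ (x - 3), lambda x: 2 * (x - 3), np.zeros(5)))
```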