Multi-Secant Equations, Approximate Invariant Subspaces and Multigrid Optimization

New approximate secant equations are shown to result from the knowledge of (problem-dependent) invariant subspace information, which in turn suggests improvements in quasi-Newton methods for unconstrained minimization. It is also shown that this type of information may often be extracted from the multigrid structure of discretized infinite-dimensional problems. A new limited-memory BFGS using …
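For readers unfamiliar with the terminology: the classical secant equation and its multi-secant (block) generalization can be written as follows, in standard notation that is ours rather than the paper's:

    B_{k+1} s_k = y_k,   where   s_k = x_{k+1} - x_k,   y_k = \nabla f(x_{k+1}) - \nabla f(x_k),
    B_{k+1} S_k \approx Y_k,   where   S_k = [s_{k-m+1}, \ldots, s_k],   Y_k = [y_{k-m+1}, \ldots, y_k].

The abstract refers to approximate relations of the second, block, form.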

A Retrospective Trust-Region Method for Unconstrained Optimization

We introduce a new trust-region method for unconstrained optimization where the radius update is computed using the model information at the current iterate rather than at the preceding one. The update is then performed according to how well the current model retrospectively predicts the value of the objective function at the last iterate. Global convergence to …
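Concretely, where a classical trust-region method measures the ratio of achieved to predicted reduction using the old model m_k, the retrospective idea judges the step just taken with the new model m_{k+1} (a sketch of the idea in standard trust-region notation; the symbols are ours):

    \tilde{\rho}_{k+1} = \frac{f(x_k) - f(x_{k+1})}{m_{k+1}(x_k) - m_{k+1}(x_{k+1})},

and the radius \Delta_{k+1} is then enlarged or shrunk according to \tilde{\rho}_{k+1}.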

Bracketing an Optimum in Univariate Optimization

In this article, we consider some problems of bracketing an optimum point for a real-valued, single-variable function. We show that no method satisfying certain assumptions and requiring a bounded number of function evaluations can exist to bracket the minimum point of a unimodal function. A similar result is also given for the problem of …
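Here a bracket of a minimizer means a triple a < b < c with f(b) \le \min(f(a), f(c)). For orientation, a minimal sketch of the usual downhill-expansion strategy, which the article's result says cannot be guaranteed to succeed within any fixed evaluation budget (illustrative only; not the article's construction):

```python
def bracket_minimum(f, a=0.0, b=1.0, grow=2.0, max_evals=50):
    """Try to bracket a local minimizer of f: return (a, b, c) with
    a < b < c and f(b) <= min(f(a), f(c)).  May return None when the
    evaluation budget runs out -- consistent with the article's
    impossibility result for bounded budgets."""
    fa, fb = f(a), f(b)
    if fa < fb:                      # ensure we walk downhill from a to b
        a, b, fa, fb = b, a, fb, fa
    step = (b - a) * grow
    for _ in range(max_evals):
        c = b + step
        fc = f(c)
        if fc > fb:                  # f rose again: (a, b, c) brackets a minimizer
            return (a, b, c) if a < c else (c, b, a)
        a, b, fa, fb = b, c, fb, fc  # keep walking downhill
        step *= grow
    return None                      # budget exhausted without a bracket
```

For example, bracket_minimum(lambda x: (x - 10.0)**2) returns (3.0, 7.0, 15.0) after a handful of evaluations, while an objective whose minimizer lies arbitrarily far out defeats any fixed budget.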

A multilevel algorithm for solving the trust-region subproblem

We present a multilevel numerical algorithm for the exact solution of the Euclidean trust-region subproblem. This particular subproblem typically arises when optimizing a nonlinear (possibly non-convex) objective function whose variables are discretized continuous functions, in which case the different levels of discretization provide a natural multilevel context. The trust-region problem is considered at the highest …
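For reference, the Euclidean trust-region subproblem in question is the standard one (notation ours):

    \min_{s \in \mathbb{R}^n} \; g^T s + \tfrac{1}{2} s^T H s   \quad \text{subject to} \quad \|s\|_2 \le \Delta,

whose exact solutions are characterized by (H + \lambda I) s = -g for some \lambda \ge 0 with H + \lambda I positive semidefinite and \lambda (\|s\|_2 - \Delta) = 0.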

Adaptive cubic overestimation methods for unconstrained optimization

An Adaptive Cubic Overestimation (ACO) algorithm for unconstrained optimization is proposed, generalizing at the same time an unpublished method due to Griewank (Technical Report NA/12, 1981, DAMTP, Univ. of Cambridge), an algorithm by Nesterov & Polyak (Math. Programming 108(1), 2006, pp 177-205) and a proposal by Weiser, Deuflhard & Erdmann (Optim. Methods Softw. 22(3), 2007, …
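The cubic model underlying this family of methods has the following standard form (a sketch consistent with the cited references; \sigma_k denotes the adaptively updated overestimation weight):

    m_k(s) = f(x_k) + g_k^T s + \tfrac{1}{2} s^T B_k s + \tfrac{\sigma_k}{3} \|s\|^3,

and each iteration approximately minimizes m_k over s, adjusting \sigma_k in much the same way a trust-region method adjusts its radius.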

Duality in quasi-Newton methods and new variational characterizations of the DFP and BFGS updates

It is known that quasi-Newton updates can be characterized by variational means, sometimes in more than one way. This paper has two main goals. We first formulate variational problems appearing in quasi-Newton methods within the space of symmetric matrices. This simplifies both their formulations and their subsequent solutions. We then construct, for the first time, …
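The best-known characterizations of this kind, in the form given for instance by Nocedal and Wright (notation ours, not the paper's): the DFP update solves

    \min_B \; \| W^{1/2} (B - B_k) W^{1/2} \|_F   \quad \text{s.t.} \quad B = B^T, \; B s_k = y_k,

for any weight matrix W satisfying W y_k = s_k, while BFGS solves the analogous problem for the inverse approximation H_k, with the roles of s_k and y_k exchanged.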

Iterative Minimization Schemes for Solving the Single Source Localization Problem

We consider the problem of locating a single radiating source from several noisy measurements using a maximum likelihood (ML) criterion. The resulting optimization problem is nonconvex and nonsmooth, and thus finding its global solution is in principle a hard task. Exploiting the special structure of the objective function, we introduce and analyze two iterative schemes …
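Under the usual i.i.d. Gaussian noise model, with sensor locations a_1, \ldots, a_m and noisy range measurements d_i, the ML estimate of the source location x solves (a standard formulation; notation ours):

    \min_{x} \; \sum_{i=1}^{m} \big( \|x - a_i\| - d_i \big)^2,

which is nonconvex and nonsmooth at the sensor points, as the abstract notes.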

A 2-BFGS updating in a trust region framework

We present a new matrix-free method for the trust-region subproblem, assuming that the approximate Hessian is updated by the limited-memory BFGS formula with m = 2. The resulting updating scheme, called 2-BFGS, gives us the ability to determine via simple formulas the eigenvalues of the resulting approximation. Thus, at each iteration, we can …
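The eigenvalue claim rests on a general linear-algebra fact: a rank-r modification of a scaled identity has at most r eigenvalues different from the scaling. A minimal sketch of that fact for a generic low-rank update (not the paper's specific 2-BFGS formulas):

```python
import numpy as np

def eigs_identity_plus_lowrank(gamma, U, C):
    """All n eigenvalues of B = gamma*I + U @ C @ U.T (U: n-by-r with full
    column rank, C: r-by-r symmetric, r << n), without ever forming B.
    B acts as gamma*I on the orthogonal complement of range(U), so n - r
    eigenvalues equal gamma; the remaining r are gamma + eig(R @ C @ R.T),
    where U = Q @ R is a thin QR factorization."""
    n, r = U.shape
    Q, R = np.linalg.qr(U)                            # thin QR, O(n r^2) work
    small = R @ C @ R.T                               # r-by-r projected matrix
    lam = np.linalg.eigvalsh((small + small.T) / 2)   # small eigenproblem
    return np.concatenate([np.full(n - r, gamma), gamma + lam])
```

With m = 2 the compact limited-memory BFGS representation gives an update of rank at most 4, so the dominant cost is the thin QR plus a 4-by-4 eigenproblem, i.e., O(n) work per iteration.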

Developments of NEWUOA for unconstrained minimization without derivatives

The NEWUOA software is described briefly, with some numerical results that show good efficiency and accuracy in the unconstrained minimization without derivatives of functions of up to 320 variables. Some preliminary work on an extension of NEWUOA that allows simple bounds on the variables is also described. It suggests a variation of a technique in …
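For readers who wish to experiment, Powell's NEWUOA is wrapped, among other places, in the NLopt library; a minimal usage sketch, assuming NLopt's Python bindings (NLopt also ships LN_NEWUOA_BOUND, a variant with simple bound constraints):

```python
import numpy as np
import nlopt

def rosenbrock(x, grad):
    # grad is ignored: NEWUOA is derivative-free and builds its own
    # quadratic interpolation models of the objective
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

n = 10
opt = nlopt.opt(nlopt.LN_NEWUOA, n)   # Powell's NEWUOA: local, derivative-free
opt.set_min_objective(rosenbrock)
opt.set_xtol_rel(1e-8)
x_star = opt.optimize(np.zeros(n))
print(x_star, opt.last_optimum_value())
```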

A view of algorithms for optimization without derivatives

Let the least value of a function of many variables be required. If its gradient is available, then one can tell whether search directions are downhill, and first order conditions help to identify the solution. It seems in practice, however, that the vast majority of unconstrained calculations do not employ any derivatives. A view of …