Quasi-Newton updates with weighted secant equations

We provide a formula for variational quasi-Newton updates with multiple weighted secant equations. The derivation of the formula leads to a Sylvester equation in the correction matrix. Examples are given. Citation: Report naXys-09-2013, Namur Centre for Complex Systems, University of Namur, Namur (Belgium).
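The computational kernel of such an update is the solution of a Sylvester equation. The sketch below is purely illustrative and does not reproduce the paper's update formula: the matrices A, B and C are random placeholders standing in for the quantities that the weighted multiple-secant conditions would produce.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Placeholder data: in the paper these matrices come from the weighted
# multiple-secant conditions; here they are random, for illustration only.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))

# Solve the Sylvester equation A X + X B = C for the correction matrix X.
X = solve_sylvester(A, B, C)

# Verify the residual of the Sylvester equation.
print(np.linalg.norm(A @ X + X @ B - C))
```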

Adaptive observations and multilevel optimization in data assimilation

We propose to use a decomposition of large-scale incremental four-dimensional variational (4D-Var) data assimilation problems in order to make their numerical solution more efficient. This decomposition is based on exploiting an adaptive hierarchy of the observations. Starting with a low-cardinality set and the solution of its corresponding optimization problem, observations are adaptively added based on …
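A rough illustration of the adaptive idea, under loudly stated assumptions: the 4D-Var formulation is replaced by a plain linear least-squares problem, and the selection rule (add the observations with the largest current residuals) is a guess, not the criterion used in the paper.

```python
import numpy as np

def adaptive_lsq(H, y, n_start=10, n_add=10, n_max=100):
    """Illustrative adaptive loop: solve a least-squares problem on a growing
    subset of observations (hypothetical stand-in for incremental 4D-Var)."""
    m = len(y)
    active = list(range(min(n_start, m)))      # start from a low-cardinality set
    while True:
        # Solve the current low-cardinality problem.
        x, *_ = np.linalg.lstsq(H[active], y[active], rcond=None)
        rest = [i for i in range(m) if i not in active]
        if not rest or len(active) >= n_max:
            return x, active
        # Assumed selection rule: add the observations with the largest residuals.
        resid = np.abs(H[rest] @ x - y[rest])
        worst = np.argsort(resid)[-n_add:]
        active += [rest[i] for i in worst]
```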

An example of slow convergence for Newton’s method on a function with globally Lipschitz continuous Hessian

An example is presented where Newton’s method for unconstrained minimization is applied to find an $\epsilon$-approximate first-order critical point of a smooth function and takes a multiple of $\epsilon^{-2}$ iterations and function evaluations to terminate, which is as many as the steepest-descent method in its worst case. The novel feature of the proposed example is that …
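For concreteness, a plain Newton iteration with the termination test used in this kind of analysis (stop once the gradient norm drops below $\epsilon$) might look as follows. This is a generic sketch on a simple test function, not the paper's slow-convergence example.

```python
import numpy as np

def newton(grad, hess, x0, eps=1e-6, max_iter=10_000):
    """Basic Newton's method for unconstrained minimization; stops at an
    eps-approximate first-order critical point, i.e. when ||grad(x)|| <= eps."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:
            return x, k
        x = x - np.linalg.solve(hess(x), g)   # full Newton step, no globalization
    return x, max_iter

# Example usage on f(x) = sum(x_i^2 + x_i^4), a simple strictly convex function.
f_grad = lambda x: 2 * x + 4 * x**3
f_hess = lambda x: np.diag(2 + 12 * x**2)
x_star, iters = newton(f_grad, f_hess, np.array([1.0, -2.0]))
```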

CUTEst: a Constrained and Unconstrained Testing Environment with safe threads

We describe the most recent evolution of our constrained and unconstrained testing environment and its accompanying SIF decoder. Code-named SIFDecode and CUTEst, these updated versions feature dynamic memory allocation, a modern thread-safe Fortran modular design, a new Matlab interface and a revised installation procedure integrated with GALAHAD. Citation: Technical Report, Rutherford Appleton Laboratory, Chilton, Oxfordshire, …
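As a usage illustration only: the abstract mentions the Fortran core and a Matlab interface, whereas the snippet below assumes the separate pycutest Python interface is installed, which is an assumption about the reader's setup rather than something described in the report.

```python
import pycutest  # assumes CUTEst, SIFDecode and pycutest are installed

# Build a SIF test problem via the decoder, then evaluate it at the starting point.
p = pycutest.import_problem('ROSENBR')
f, g = p.obj(p.x0, gradient=True)   # objective value and gradient
print(p.name, p.n, f)
```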

On the evaluation complexity of constrained nonlinear least-squares and general constrained nonlinear optimization using second-order methods

When solving the general smooth nonlinear optimization problem involving equality and/or inequality constraints, an approximate first-order critical point of accuracy $\epsilon$ can be obtained by a second-order method using cubic regularization in at most $O(\epsilon^{-3/2})$ problem-function evaluations, the same order bound as in the unconstrained case. This result is obtained by first showing that the …
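The second-order methods referred to here rely on minimizing a cubic-regularized local model at each iteration. The sketch below shows one such model minimization in a generic form; the subproblem solver and test data are illustrative assumptions, not the specific algorithm analyzed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def cubic_reg_step(g, H, sigma):
    """Approximately minimize the cubic-regularized model
    m(s) = g's + 0.5 s'Hs + (sigma/3) ||s||^3 around the current iterate."""
    model = lambda s: g @ s + 0.5 * s @ H @ s + (sigma / 3.0) * np.linalg.norm(s) ** 3
    return minimize(model, np.zeros_like(g)).x

# Example: one regularized step with an indefinite Hessian approximation.
g = np.array([1.0, -2.0])
H = np.array([[2.0, 0.0], [0.0, -1.0]])
s = cubic_reg_step(g, H, sigma=1.0)
```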

Worst-case evaluation complexity of non-monotone gradient-related algorithms for unconstrained optimization

The worst-case evaluation complexity of finding an approximate first-order critical point using gradient-related non-monotone methods for smooth nonconvex and unconstrained problems is investigated. The analysis covers a practical linesearch implementation of these popular methods, allowing for an unknown number of evaluations of the objective function (and its gradient) per iteration. It is shown that this …
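A typical non-monotone linesearch of the kind covered by such analyses accepts a step once sufficient decrease holds with respect to the maximum of a few recent objective values. The sketch below uses a standard Grippo-type rule as an illustrative assumption, not the paper's exact conditions, and counts the objective evaluations spent in backtracking.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_hist, M=10, beta=1e-4, tau=0.5, alpha=1.0,
                       max_backtracks=50):
    """Backtracking linesearch: accept alpha when f(x + alpha d) improves on the
    maximum of the last M objective values by beta * alpha * g'd
    (d is assumed to be a gradient-related descent direction)."""
    f_ref = max(f_hist[-M:])
    evals = 0
    for _ in range(max_backtracks):
        f_trial = f(x + alpha * d)
        evals += 1                      # one objective evaluation per trial step
        if f_trial <= f_ref + beta * alpha * float(g @ d):
            break
        alpha *= tau
    return alpha, f_trial, evals
```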

Conjugate-gradients versus multigrid solvers for diffusion-based correlation models in data assimilation

This paper provides a theoretical and experimental comparison between conjugate gradients and multigrid, two iterative schemes for solving linear systems, in the context of applying diffusion-based correlation models in data assimilation. In this context, a large number of such systems have to be (approximately) solved if the implicit mode is chosen for integrating the involved diffusion …
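In the implicit mode, each diffusion step requires the (approximate) solution of a symmetric positive-definite linear system. The self-contained sketch below solves one such system with conjugate gradients on a 1-D discrete diffusion operator; the operator, grid size and time step are illustrative assumptions, not the paper's operational configuration.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# One implicit 1-D diffusion step: solve (I - dt * L) x = b, L the discrete Laplacian.
n, dt = 1000, 0.1
L = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format='csr')
A = sp.identity(n, format='csr') - dt * L        # symmetric positive definite
b = np.random.default_rng(0).standard_normal(n)

x, info = cg(A, b)                               # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))
```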

How much patience do you have? A worst-case perspective on smooth nonconvex optimization

The paper presents a survey of recent results in the field of worst-case complexity of algorithms for nonlinear (and possibly nonconvex) smooth optimization. Both the constrained and unconstrained cases are considered.

On the complexity of the steepest-descent with exact linesearches

The worst-case complexity of the steepest-descent algorithm with exact linesearches for unconstrained smooth optimization is analyzed, and it is shown that the number of iterations of this algorithm which may be necessary to find an iterate at which the norm of the objective function’s gradient is less than a prescribed $\epsilon$ is, essentially, a multiple …
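On a strictly convex quadratic the exact linesearch step has a closed form, which makes the algorithm easy to state. The sketch below is a generic illustration of steepest descent with exact linesearches, not the worst-case example constructed in the paper.

```python
import numpy as np

def steepest_descent_exact(A, b, x0, eps=1e-8, max_iter=100_000):
    """Steepest descent with exact linesearch for f(x) = 0.5 x'Ax - b'x,
    with A symmetric positive definite; stop when the gradient norm is below eps."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = A @ x - b                      # gradient of the quadratic
        if np.linalg.norm(g) < eps:
            return x, k
        alpha = (g @ g) / (g @ (A @ g))    # exact minimizer of f along -g
        x = x - alpha * g
    return x, max_iter
```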

Linearizing the Method of Conjugate Gradients

The method of conjugate gradients (CG) is widely used for the iterative solution of large sparse systems of equations $Ax=b$, where $A\in\Re^{n\times n}$ is symmetric positive definite. Let $x_k$ denote the $k$-th iterate of CG. In this paper we obtain an expression for $J_k$, the Jacobian matrix of $x_k$ with respect to $b$. We use …
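As a quick numerical illustration of the object being studied (the paper's closed-form expression for $J_k$ is not reproduced here), the Jacobian of the $k$-th CG iterate with respect to $b$ can be approximated column by column with central finite differences.

```python
import numpy as np

def cg_iterate(A, b, k):
    """Return the k-th iterate of plain conjugate gradients on Ax = b, with x_0 = 0
    (k is assumed small enough that the residual has not yet vanished)."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    for _ in range(k):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x

def jacobian_fd(A, b, k, h=1e-6):
    """Finite-difference approximation of J_k = d x_k / d b, column by column.
    For k = n, x_k solves Ax = b exactly and J_k approaches A^{-1}."""
    n = b.size
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (cg_iterate(A, b + e, k) - cg_iterate(A, b - e, k)) / (2 * h)
    return J
```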