Using approximate secant equations in limited memory methods for multilevel unconstrained optimization

The properties of multilevel optimization problems defined on a hierarchy of discretization grids can be used to define approximate secant equations, which describe the second-order behaviour of the objective function. Following earlier work by Gratton and Toint (2009), we introduce a quasi-Newton method (with a linesearch) and a nonlinear conjugate gradient method that both take …
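
As background for the secant-equation idea, here is a minimal sketch of the exact secant condition B_{k+1} s_k = y_k that a standard BFGS update enforces; the multilevel methods described above replace y_k by cheaper approximations built from the grid hierarchy. The function name and skip tolerance below are illustrative, not taken from the paper.

```python
import numpy as np

def bfgs_update(B, s, y):
    """One BFGS update of a Hessian approximation B from a secant pair
    (s, y), enforcing the secant equation B_new @ s = y.  The update is
    skipped when the curvature s^T y is too small, a standard safeguard."""
    sy = s @ y
    if sy <= 1e-10 * np.linalg.norm(s) * np.linalg.norm(y):
        return B  # curvature condition fails: keep the previous approximation
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy
```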

Alternating direction algorithms for total variation deconvolution in image reconstruction

Image restoration and reconstruction from blurry and noisy observations is known to be ill-posed. To stabilize the recovery, total variation (TV) regularization was introduced by Rudin, Osher and Fatemi in 1992, and has demonstrated superiority in preserving image edges. However, the nondifferentiability of TV makes the underlying optimization problems difficult to solve. In this paper, …
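
As an illustration of one alternating-direction building block (a generic sketch, not necessarily this paper's exact scheme): after the TV term is split off with an auxiliary variable, each iteration applies a closed-form shrinkage to the image gradient. The component-wise (anisotropic) form is shown, and the function name is ours.

```python
import numpy as np

def shrink(d, tau):
    """Soft-thresholding: the closed-form minimizer of
    tau*|u| + 0.5*(u - d)^2 applied component-wise, the subproblem
    alternating-direction TV solvers solve for the auxiliary
    gradient variable at every iteration."""
    mag = np.abs(d)
    return np.where(mag > tau, (1.0 - tau / np.maximum(mag, 1e-12)) * d, 0.0)
```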

On the complexity of steepest descent, Newton’s and regularized Newton’s methods for nonconvex unconstrained optimization

It is shown that the steepest descent and Newton's methods for unconstrained nonconvex optimization under standard assumptions may both require a number of iterations and function evaluations arbitrarily close to O(epsilon^{-2}) to drive the norm of the gradient below epsilon. This shows that the upper bound of O(epsilon^{-2}) evaluations known for the steepest descent …
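
For context, a worked sketch of the standard argument behind the O(epsilon^{-2}) upper bound that the paper shows is essentially tight (assuming an L-Lipschitz gradient and a lower bound f_low on f):

```latex
% Steepest descent with step 1/L decreases f at every iteration by at least
\[
  f(x_{k+1}) \;\le\; f(x_k) \;-\; \frac{1}{2L}\,\|\nabla f(x_k)\|^2 .
\]
% Telescoping over k = 0, \dots, K-1 gives
\[
  \min_{0 \le k < K} \|\nabla f(x_k)\|
  \;\le\; \sqrt{\frac{2L\,\bigl(f(x_0) - f_{\mathrm{low}}\bigr)}{K}},
\]
% so at most K = O(\epsilon^{-2}) iterations suffice to reach
% \|\nabla f(x_k)\| \le \epsilon; the paper constructs examples on which
% this evaluation count is essentially attained.
```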

A globally convergent modified conjugate-gradient line-search algorithm with inertia controlling

In this paper we address the problem of unboundedness of the search direction when the Hessian is indefinite or nearly singular. A new algorithm is proposed which naturally handles singular Hessian matrices and is theoretically equivalent to the trust-region approach. This is accomplished by adaptively performing explicit matrix modifications that mimic the implicit …
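
A minimal sketch of the general inertia-controlling idea, shown here with a simple diagonal shift rather than the paper's adaptive modifications; the function name and starting shift are illustrative.

```python
import numpy as np

def shifted_newton_direction(H, g, sigma0=1e-4):
    """Illustrative inertia-controlling device (not the paper's algorithm):
    when H is not safely positive definite, solve (H + sigma*I) d = -g for
    increasing shifts sigma, mimicking the implicit trust-region shift and
    keeping the search direction bounded."""
    n = H.shape[0]
    sigma = 0.0
    while True:
        try:
            L = np.linalg.cholesky(H + sigma * np.eye(n))  # fails if indefinite
            return np.linalg.solve(L.T, np.linalg.solve(L, -g)), sigma
        except np.linalg.LinAlgError:
            sigma = max(2.0 * sigma, sigma0)  # increase the shift and retry
```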

On the Stopping Criterion for Numerical Methods Used to Solve Linear Systems with Additive Gaussian Noise

We consider the inversion of a linear operator under additive centered Gaussian white noise by MAP estimation with a Gaussian prior distribution on the solution. The actual estimator is computed approximately by a numerical method. We propose a relation between the stationarity measure of this approximate solution and the mean square error of the exact solution. …
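
The setting, reconstructed from the abstract with our own notation:

```latex
% Observe b = A x + n with centered Gaussian white noise
% n ~ N(0, sigma^2 I) and a Gaussian prior x ~ N(0, C).
% The MAP estimate minimizes the quadratic
\[
  J(x) \;=\; \frac{1}{2\sigma^{2}}\,\|A x - b\|^{2}
         \;+\; \frac{1}{2}\, x^{\top} C^{-1} x ,
\]
% and the stationarity measure of a numerical iterate x_k is
% \|\nabla J(x_k)\|; the proposed stopping rule relates this quantity
% to the mean square error of the exact MAP solution.
```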

A Combined Class of Self-Scaling and Modified Quasi-Newton Methods

Techniques for obtaining safely positive definite Hessian approximations with self-scaling and modified quasi-Newton updates are combined to obtain 'better' curvature approximations in line search methods for unconstrained optimization. It is shown that this class of methods, like the BFGS method, has global and superlinear convergence for convex functions. Numerical experiments with this class, using the …
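
For orientation, a sketch of a classical self-scaled BFGS update in the Oren-Luenberger style; the paper combines self-scaling with modified updates in a specific way not reproduced here, and the safeguard below is a generic one.

```python
import numpy as np

def self_scaled_bfgs(B, s, y):
    """Illustrative self-scaled BFGS update: rescale B so that the model
    curvature s^T B s matches the observed curvature s^T y, then apply the
    standard BFGS correction.  B stays positive definite whenever s^T y > 0."""
    sy = s @ y
    if sy <= 0:
        return B                    # skip: curvature condition fails
    Bs = B @ s
    tau = sy / (s @ Bs)             # self-scaling factor
    B, Bs = tau * B, tau * Bs
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy
```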

Transformations enabling to construct limited-memory Broyden class methods

The Broyden class of quasi-Newton updates for inverse Hessian approximation is transformed to the formal BFGS update, which makes it possible to generalize the well-known Nocedal method based on the Strang recurrences to the scaled limited-memory Broyden family, using the same number of stored vectors as for the limited-memory BFGS method. Two variants are given, the …
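
The Strang recurrences referred to here underlie Nocedal's two-loop recursion, sketched below for a stored history of pairs; the paper's transformation lets this same machinery drive the scaled Broyden family. The function name is ours.

```python
import numpy as np

def lbfgs_two_loop(g, pairs, gamma):
    """Two-loop recursion: computes H_k @ g implicitly from the stored
    pairs (s_i, y_i), ordered oldest first, with initial scaling
    H_0 = gamma * I, never forming the n-by-n matrix H_k."""
    q = g.copy()
    alphas, rhos = [], []
    for s, y in reversed(pairs):              # first loop: newest pair first
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        q = q - alpha * y
        rhos.append(rho)
        alphas.append(alpha)
    r = gamma * q                             # apply the initial matrix H_0
    for (s, y), rho, alpha in zip(pairs, reversed(rhos), reversed(alphas)):
        beta = rho * (y @ r)
        r = r + (alpha - beta) * s
    return r                                  # r = H_k @ g
```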

Limited-memory projective variable metric methods for unconstrained minimization

A new family of limited-memory variable metric or quasi-Newton methods for unconstrained minimization is given. The methods are based on a positive definite inverse Hessian approximation in the form of the sum of the identity matrix and two low-rank matrices, obtained by the standard scaled Broyden class update. To reduce the rank of these matrices, various …
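
A minimal sketch of how such a compact inverse Hessian can be applied to a vector without forming an n-by-n matrix; the parametrization (signs and factors) is an assumption for illustration, since the abstract does not fix it.

```python
import numpy as np

def apply_inverse_hessian(g, zeta, U, V):
    """Matrix-vector product with an inverse Hessian kept in the compact
    form H = zeta*I + U @ U.T - V @ V.T (identity plus two low-rank terms).
    Cost is O(n*m) for n variables and rank-m factors."""
    return zeta * g + U @ (U.T @ g) - V @ (V.T @ g)
```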

Computational experience with modified conjugate gradient methods for unconstrained optimization

In this report, several modifications of the nonlinear conjugate gradient method are described and investigated. Theoretical properties of these modifications are proved and their practical performance is demonstrated using extensive numerical experiments.
Citation: Technical report No. 1038, Institute of Computer Science, Pod Vodarenskou Vezi 2, 18207 Praha 8, December 2008.
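
As one concrete example of the kind of modification studied in such reports (the report's own variants are not given in this excerpt), the PR+ truncation of the Polak-Ribiere direction update:

```python
import numpy as np

def prplus_direction(g_new, g_old, d_old):
    """One nonlinear CG step direction with the PR+ modification:
    clipping beta at zero lets the method restart along -g_new
    whenever the Polak-Ribiere formula turns negative."""
    beta = max(0.0, g_new @ (g_new - g_old) / (g_old @ g_old))
    return -g_new + beta * d_old
```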

On solving trust-region and other regularised subproblems in optimization

The solution of trust-region and regularisation subproblems which arise in unconstrained optimization is considered. Building on the pioneering work of Gay, Moré and Sorensen, methods which obtain the solution of a sequence of parametrized linear systems by factorization are used. Enhancements using high-order polynomial approximation and inverse iteration ensure that the resulting method is both …
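
A compact sketch of the Gay/Moré-Sorensen approach the paper builds on: Newton's method on the secular equation, with each trial multiplier resolved by a Cholesky factorization. Full safeguarding and the hard case are omitted; the function name and tolerances are illustrative.

```python
import numpy as np

def trust_region_subproblem(B, g, delta, tol=1e-8, max_iter=50):
    """Find lam >= 0 with (B + lam*I) s = -g and ||s|| <= delta by
    Newton's method on the secular equation 1/delta - 1/||s(lam)|| = 0,
    factorizing B + lam*I at each trial value of lam."""
    n = B.shape[0]
    lam_min = max(0.0, -np.linalg.eigvalsh(B)[0] + 1e-10)  # keeps B + lam*I PD
    lam = lam_min
    for _ in range(max_iter):
        L = np.linalg.cholesky(B + lam * np.eye(n))         # B + lam*I = L L^T
        s = np.linalg.solve(L.T, np.linalg.solve(L, -g))
        ns = np.linalg.norm(s)
        if ns <= delta and lam <= lam_min:                   # interior solution
            return s, lam
        if abs(ns - delta) <= tol * delta:                   # boundary solution
            return s, lam
        w = np.linalg.solve(L, s)                            # w = L^{-1} s
        lam = max(lam + (ns / np.linalg.norm(w)) ** 2 * (ns - delta) / delta,
                  lam_min)
    return s, lam
```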