Conjugate-gradients versus multigrid solvers for diffusion-based correlation models in data assimilation

This paper provides a theoretical and experimental comparison between conjugate gradients and multigrid, two iterative schemes for solving linear systems, in the context of applying diffusion-based correlation models in data assimilation. In this context, a large number of such systems have to be (approximately) solved if the implicit mode is chosen for integrating the involved diffusion …
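
As background for the comparison, the sketch below shows a minimal matrix-free conjugate-gradient solver applied to a small one-dimensional implicit-diffusion system. The operator, step size, and tolerance are illustrative assumptions, not the correlation operator or configuration used in the paper.

```python
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-8, max_iter=500):
    """Solve A x = b for a symmetric positive definite A, given only a
    routine apply_A(v) = A @ v (matrix-free, as in implicit diffusion)."""
    x = np.zeros_like(b)
    r = b - apply_A(x)          # initial residual
    p = r.copy()                # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Illustrative SPD system (I - dt * L) x = b with a 1D Laplacian L,
# the kind of system a single implicit diffusion step produces.
n, dt = 100, 0.1
L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
A = np.eye(n) - dt * L
b = np.random.default_rng(0).standard_normal(n)
x = conjugate_gradient(lambda v: A @ v, b)
print(np.linalg.norm(A @ x - b))   # residual norm near the tolerance
```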

How much patience do you have? A worst-case perspective on smooth nonconvex optimization

The paper presents a survey of recent results in the field of worst-case complexity of algorithms for nonlinear (and possibly nonconvex) smooth optimization. Both the constrained and the unconstrained cases are considered.

Global convergence and the Powell singular function

The Powell singular function was introduced in 1962 by M.J.D. Powell as an unconstrained optimization problem. The function is also used as a nonlinear least-squares problem and as a system of nonlinear equations. It is a classic test function included in collections of optimization test problems, as well as an example problem in textbooks. In …
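
For reference, the sketch below writes the Powell singular function in its standard nonlinear least-squares form (a residual vector whose squared norm is the objective), with the usual starting point (3, -1, 0, 1). The minimum value 0 is attained at the origin, where the Jacobian of the residuals is singular, which is what makes the problem a useful test case.

```python
import numpy as np

def powell_residuals(x):
    """Residual vector of the Powell singular function; the objective is
    f(x) = ||r(x)||^2 (the standard test-problem form, as found e.g. in the
    More-Garbow-Hillstrom collection)."""
    x1, x2, x3, x4 = x
    return np.array([
        x1 + 10.0 * x2,
        np.sqrt(5.0) * (x3 - x4),
        (x2 - 2.0 * x3) ** 2,
        np.sqrt(10.0) * (x1 - x4) ** 2,
    ])

def powell(x):
    r = powell_residuals(x)
    return r @ r

x0 = np.array([3.0, -1.0, 0.0, 1.0])   # standard starting point
print(powell(x0))                      # f(x0) = 215
print(powell(np.zeros(4)))             # minimum value 0 at the origin,
                                       # where the Jacobian of r is singular
```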

On the convergence of the modified Levenberg-Marquardt method with a nonmonotone second order Armijo type line search

Recently, Fan [4, Math. Comput., 81 (2012), pp. 447-466] proposed a modified Levenberg-Marquardt (MLM) method for nonlinear equations. Using a trust-region technique, global and cubic convergence of the MLM method are proved in [4] under the local error bound condition, which is weaker than nonsingularity. The purpose of this paper is to investigate the convergence …
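
As context, a basic Levenberg-Marquardt iteration for F(x) = 0 is sketched below. The damping rule and the absence of a line search or trust region are illustrative simplifications; this is not the MLM method analysed in the paper, which adds an extra approximate LM step per iteration and the safeguards discussed in the abstract.

```python
import numpy as np

def levenberg_marquardt(F, J, x0, mu=1e-3, tol=1e-10, max_iter=100):
    """Basic Levenberg-Marquardt iteration for F(x) = 0 with damping
    lambda_k = mu * ||F(x_k)||^2 (an illustrative choice only)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx, Jx = F(x), J(x)
        if np.linalg.norm(Fx) < tol:
            break
        lam = mu * (Fx @ Fx)
        # LM step: (J^T J + lam I) d = -J^T F
        d = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), -Jx.T @ Fx)
        x = x + d
    return x

# Tiny example: intersect a circle and a line.
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
print(levenberg_marquardt(F, J, [2.0, 0.5]))   # approx. (0.707, 0.707)
```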

Numerical optimization methods for blind deconvolution

This paper describes a nonlinear least-squares framework for solving a separable nonlinear ill-posed inverse problem that arises in blind deconvolution. It is shown that, with proper constraints and well-chosen regularization parameters, it is possible to obtain an objective function that is fairly well behaved and the nonlinear minimization problem can be effectively solved …
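
One common way to exploit such separable structure is variable projection: the linear (image) variables are eliminated through an inner regularized linear solve, and the outer nonlinear problem is posed only over the few point-spread-function parameters. The toy 1D sketch below uses hypothetical names, a plain Tikhonov term, and a Gaussian blur model; it illustrates the general idea only and is not the constrained formulation of the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Separable structure: data b ~ A(y) @ x, where y holds the few nonlinear
# blur parameters and x is the (linear) image. For fixed y, x solves a
# regularized linear least-squares problem; the outer optimization is over
# y only. Toy 1D example with illustrative names and parameters.

rng = np.random.default_rng(1)
n = 64
t = np.arange(n)
x_true = (np.abs(t - 32) < 6).astype(float)          # true "image"

def blur_matrix(sigma):
    K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / sigma) ** 2)
    return K / K.sum(axis=1, keepdims=True)

b = blur_matrix(3.0) @ x_true + 0.01 * rng.standard_normal(n)

alpha = 1e-2                                          # Tikhonov parameter

def reduced_residual(y):
    A = blur_matrix(y[0])
    # inner linear solve: min_x ||A x - b||^2 + alpha ||x||^2
    x = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
    return A @ x - b

fit = least_squares(reduced_residual, x0=[5.0], bounds=(0.5, 10.0))
print(fit.x)              # estimated blur width, roughly recovering sigma = 3
```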

A new probabilistic algorithm for solving nonlinear equations systems

In this paper, we consider a class of optimization problems with the following characteristic: there exists a fixed number k, independent of the problem size n, such that randomly changing the values of k variables can produce a new solution that is better than …
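
Taken literally, the mechanism described above resembles a randomized improve-or-reject search that perturbs k coordinates at a time. The sketch below is only an illustration of that idea on a tiny nonlinear system, not the algorithm proposed in the paper.

```python
import numpy as np

def random_k_search(f, x0, k=1, step=0.5, iters=5000, seed=0):
    """Naive randomized search: repeatedly perturb a randomly chosen set of
    k variables and keep the new point only if it improves the objective.
    Purely illustrative; step size and iteration count are arbitrary."""
    rng = np.random.default_rng(seed)
    x, fx = np.asarray(x0, dtype=float), f(x0)
    for _ in range(iters):
        idx = rng.choice(x.size, size=k, replace=False)
        y = x.copy()
        y[idx] += step * rng.standard_normal(k)
        fy = f(y)
        if fy < fx:                 # accept only improving moves
            x, fx = y, fy
    return x, fx

# Example: minimize ||F(x)||^2 for a small nonlinear system.
F = lambda x: np.array([x[0] ** 2 + x[1] - 2.0, x[0] + x[1] ** 2 - 2.0])
obj = lambda x: float(np.sum(F(np.asarray(x)) ** 2))
x, fx = random_k_search(obj, np.zeros(2), k=1)
print(x, fx)   # typically ends near a root of the system (objective near 0)
```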

Global Search Strategies for Solving Multilinear Least-squares Problems

The multilinear least-squares (MLLS) problem is an extension of the linear least-squares problem: a multilinear operator is used in place of a matrix-vector product. The MLLS problem is typically large-scale and characterized by a large number of local minimizers. It originates, for instance, from the design of filter networks. We present …
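
In the simplest (bilinear) instance, an alternating strategy solves a linear least-squares problem in each factor in turn. The sketch below illustrates this on a generic model b ~ M(x) y with hypothetical data; it ignores both the scaling ambiguity between the factors and the global-search layer that the paper is about.

```python
import numpy as np

# Bilinear least squares: fit b ~ M(x) @ y with M(x) = sum_k x[k] * M_k,
# alternating between the two linear subproblems (alternating least squares).

rng = np.random.default_rng(2)
m, p, q = 40, 3, 4
Ms = rng.standard_normal((p, m, q))            # fixed "design" matrices M_k
x_true, y_true = rng.standard_normal(p), rng.standard_normal(q)
b = np.einsum('k,kij,j->i', x_true, Ms, y_true)

def M(x):
    return np.einsum('k,kij->ij', x, Ms)

x, y = np.ones(p), np.ones(q)
for _ in range(200):
    # fix x, solve min_y ||M(x) y - b||  (linear least squares)
    y, *_ = np.linalg.lstsq(M(x), b, rcond=None)
    # fix y, solve min_x ||N(y) x - b|| with N(y)[:, k] = M_k @ y
    N = np.einsum('kij,j->ik', Ms, y)
    x, *_ = np.linalg.lstsq(N, b, rcond=None)

print(np.linalg.norm(M(x) @ y - b))            # residual typically near 0
```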

Approximate spectral factorization for design of efficient sub-filter sequences

A well-known approach to the design of computationally efficient filters is spectral factorization, i.e., the decomposition of a filter into a sequence of sub-filters. Due to the sparsity of the sub-filters, the typical processing speedup factor is in the range 1-10 in 2D and 10-100 in 3D. The design of such …
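
The classical separable Gaussian is the simplest example of such a factorization: a dense 2D kernel is replaced by a cascade of two short 1D sub-filters, which cuts the per-pixel operation count roughly in line with the 2D speedup range quoted above. The kernel size and width below are illustrative choices only; the sub-filters considered in the paper are more general.

```python
import numpy as np

# A dense 15x15 Gaussian kernel realized as two 1D sub-filters (horizontal
# then vertical): 2*15 = 30 taps per pixel instead of 15*15 = 225.

n, sigma = 15, 2.5
t = np.arange(n) - n // 2
g = np.exp(-0.5 * (t / sigma) ** 2)
g /= g.sum()                       # 1D Gaussian sub-filter

full = np.outer(g, g)              # equivalent dense 2D kernel
taps_full = full.size              # 225 multiplications per pixel
taps_cascade = 2 * g.size          # 30 multiplications per pixel

print(taps_full / taps_cascade)    # ~7.5x fewer operations per pixel
# Convolving with g along rows and then along columns applies the same
# linear operator as convolving once with the dense 2D kernel `full`.
```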

Convergence analysis of a proximal Gauss-Newton method

An extension of the Gauss-Newton algorithm is proposed to find local minimizers of penalized nonlinear least-squares problems under generalized Lipschitz assumptions. Local convergence results are obtained, as well as an estimate of the radius of the convergence ball. Some applications to solving constrained nonlinear equations are discussed, and the numerical performance of …
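
For orientation, the Gauss-Newton core of such a method is sketched below on a small exponential-fitting problem; the proximal handling of the penalty term, which is the subject of the paper, is deliberately omitted, and the test problem is an illustrative assumption.

```python
import numpy as np

def gauss_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Plain Gauss-Newton for min 0.5 * ||F(x)||^2: at each iterate,
    linearize F and solve the linear least-squares subproblem
    min_d ||F(x) + J(x) d||. The proximal variant adds a penalty term g(x)
    handled through a proximal mapping; that part is omitted here."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx, Jx = F(x), J(x)
        d, *_ = np.linalg.lstsq(Jx, -Fx, rcond=None)
        x = x + d
        if np.linalg.norm(d) < tol:
            break
    return x

# Small data-fitting example: fit y ~ a * exp(b * t) to noisy samples.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t) + 0.01 * np.random.default_rng(3).standard_normal(20)
F = lambda p: p[0] * np.exp(p[1] * t) - y
J = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
print(gauss_newton(F, J, [1.0, 0.0]))   # approximately (2, -1.5)
```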

On the convergence of an inexact Gauss-Newton trust-region method for nonlinear least-squares problems with simple bounds

We introduce an inexact Gauss-Newton trust-region method for solving bound-constrained nonlinear least-squares problems in which, at each iteration, a trust-region subproblem is approximately solved by the Conjugate Gradient method. Provided suitable control of the accuracy to which the subproblems are solved, we prove that the method has global and fast asymptotic convergence properties. …
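
A standard way to solve a trust-region subproblem approximately with CG is the Steihaug-Toint truncation, sketched below for an explicit Gauss-Newton model. The bound constraints and the inexactness control analysed in the paper are not included; matrix, radius, and tolerances are illustrative.

```python
import numpy as np

def steihaug_cg(B, g, delta, tol=1e-8, max_iter=200):
    """Truncated (Steihaug-Toint) CG for the trust-region subproblem
        min_d  g.T @ d + 0.5 * d.T @ B @ d   s.t.  ||d|| <= delta,
    here with an explicit matrix B ~ J.T @ J; a matrix-free version would
    pass a product routine instead."""
    d = np.zeros_like(g)
    r = g.copy()                 # gradient of the model at d = 0
    p = -r
    for _ in range(max_iter):
        Bp = B @ p
        pBp = p @ Bp
        if pBp <= 0:             # negative curvature: step to the boundary
            return d + _to_boundary(d, p, delta) * p
        alpha = (r @ r) / pBp
        if np.linalg.norm(d + alpha * p) >= delta:
            return d + _to_boundary(d, p, delta) * p   # step hits the boundary
        d = d + alpha * p
        r_new = r + alpha * Bp
        if np.linalg.norm(r_new) < tol:
            return d
        p = -r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d

def _to_boundary(d, p, delta):
    """Positive tau such that ||d + tau * p|| = delta."""
    a, b, c = p @ p, 2 * (d @ p), d @ d - delta ** 2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

# Example with a Gauss-Newton model: B = J^T J, g = J^T F.
rng = np.random.default_rng(4)
J_, F_ = rng.standard_normal((30, 5)), rng.standard_normal(30)
d = steihaug_cg(J_.T @ J_, J_.T @ F_, delta=0.5)
print(np.linalg.norm(d))          # <= 0.5 by construction
```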