A stochastic Levenberg-Marquardt method using random models with complexity results and application to data assimilation

Globally convergent variants of the Gauss-Newton algorithm are often the methods of choice to tackle nonlinear least-squares problems. Among such frameworks, Levenberg-Marquardt and trust-region methods are two well-established, closely related paradigms. Both schemes have been studied when the Gauss-Newton model is replaced by a random model that is only accurate with a given probability. Trust-region schemes … Read more
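
For orientation, the sketch below shows one deterministic Levenberg-Marquardt step; in a stochastic variant of the kind the abstract describes, the residual and Jacobian would come from a random model (e.g., a data subsample) that is accurate only with a given probability. The damping update rule here is a common textbook choice, not the paper's specific scheme.

```python
import numpy as np

def lm_step(residual, jacobian, x, lam):
    """One Levenberg-Marquardt step on min 0.5*||r(x)||^2.

    Solves the damped Gauss-Newton system (J^T J + lam*I) s = -J^T r.
    In a stochastic variant, residual/jacobian would be built from a
    random model; the lam update below is illustrative only.
    """
    r, J = residual(x), jacobian(x)
    s = np.linalg.solve(J.T @ J + lam * np.eye(J.shape[1]), -J.T @ r)
    r_new = residual(x + s)
    pred = 0.5 * (r @ r) - 0.5 * np.linalg.norm(r + J @ s) ** 2
    actual = 0.5 * (r @ r - r_new @ r_new)
    if pred > 0 and actual >= 1e-4 * pred:   # sufficient decrease: accept
        return x + s, max(0.5 * lam, 1e-8)   # relax damping
    return x, 10.0 * lam                     # reject and increase damping
```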

A limited-memory optimization method using the infinitely many times repeated BNS update and conjugate directions

To improve the performance of the limited-memory variable metric L-BFGS method for large-scale unconstrained optimization, it was proposed in [1, 2] to repeat some of the BFGS updates. However, the extra updates must be selected carefully, since the repeating process can be time-consuming. We show that for the limited-memory variable metric BNS method, matrix … Read more
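
As background for limited-memory variable metric methods, the sketch below is the standard L-BFGS two-loop recursion; the BNS method uses an equivalent compact matrix representation rather than this loop form, and the repeated-update machinery of the paper is not shown.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion computing -H_k g_k.

    s_list[i], y_list[i] are the stored step and gradient-difference
    pairs, oldest first. Background only; the BNS method works with a
    compact matrix form of the same update.
    """
    if not s_list:
        return -grad                         # no curvature pairs yet
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        a = (s @ q) / (y @ s)
        q -= a * y
        alphas.append(a)
    s, y = s_list[-1], y_list[-1]
    q *= (s @ y) / (y @ y)                   # Shanno-Phua initial scaling
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return -q                                # quasi-Newton descent direction
```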

A family of spectral gradient methods for optimization

We propose a family of spectral gradient methods whose stepsize is determined by a convex combination of the short Barzilai-Borwein (BB) stepsize and the long BB stepsize. It is shown that each member of the family satisfies a certain quasi-Newton property in the sense of least squares. The family also includes some other gradient methods as … Read more
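
The two BB stepsizes are standard; a minimal sketch of the convex combination follows, where the blending parameter tau and the exact parametrization are assumptions for illustration, not necessarily the paper's.

```python
def family_stepsize(s, y, tau):
    """Stepsize from a convex combination of the two Barzilai-Borwein
    stepsizes, with s = x_k - x_{k-1} and y = g_k - g_{k-1}.

    tau = 0 gives the long BB stepsize s^T s / s^T y; tau = 1 gives the
    short one s^T y / y^T y. The blending rule is illustrative only.
    """
    long_bb = (s @ s) / (s @ y)    # alpha^{BB1}
    short_bb = (s @ y) / (y @ y)   # alpha^{BB2}
    return tau * short_bb + (1.0 - tau) * long_bb
```

When $s^{\top} y > 0$, the Cauchy-Schwarz inequality guarantees that the short stepsize never exceeds the long one, so the combination interpolates between them.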

Derivative-Free Optimization of Noisy Functions via Quasi-Newton Methods

This paper presents a finite-difference quasi-Newton method for the minimization of noisy functions. The method takes advantage of the scalability and power of BFGS updating, and employs an adaptive procedure for choosing the differencing interval $h$ based on the noise estimation techniques of Hamming (2012) and Moré and Wild (2011). This noise estimation procedure … Read more
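
The sketch below shows the textbook way such an interval arises, by balancing truncation error (about $h\mu/2$ for forward differences, with $\mu$ a bound on $|f''|$) against noise error (about $2\varepsilon_f/h$); the paper's adaptive rule may differ in its details.

```python
import numpy as np

def fd_gradient(f, x, eps_f, mu=1.0):
    """Forward-difference gradient of a noisy function.

    h = 2*sqrt(eps_f/mu) minimizes the bound h*mu/2 + 2*eps_f/h, where
    eps_f estimates the noise level (e.g., from a Hamming or More-Wild
    style estimator) and mu bounds |f''|; mu = 1 is a placeholder.
    """
    h = 2.0 * np.sqrt(eps_f / mu)
    fx = f(x)
    g = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g
```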

A comparison of methods for traversing non-convex regions in optimization problems

This paper revisits the well-known problem of dealing with non-convex regions during the minimization of a nonlinear function $F(x)$ by Newton-like methods. The proposal made here involves a curvilinear search along an approximation to the continuous steepest-descent path defined by the solution of the ODE $dx/dt = -\nabla F(x)$. The algorithm we develop … Read more
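
To make the idea concrete, the sketch below approximates a point on the continuous steepest-descent path by explicit Euler integration; a curvilinear search then minimizes $F$ along $t \mapsto x(t)$ instead of along a straight line. Euler is the simplest illustrative choice, not the approximation developed in the paper.

```python
import numpy as np

def descent_path_point(grad_F, x0, t, n_steps=20):
    """Approximate x(t) on the path dx/dt = -grad F(x), x(0) = x0,
    by explicit Euler integration with n_steps uniform steps."""
    x = x0.copy()
    dt = t / n_steps
    for _ in range(n_steps):
        x -= dt * grad_F(x)
    return x
```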

A Newton-CG Algorithm with Complexity Guarantees for Smooth Unconstrained Optimization

We consider the minimization of a smooth nonconvex objective function using an iterative algorithm based on Newton’s method and the linear conjugate gradient method, with explicit detection and use of negative curvature directions for the Hessian of the objective function. The algorithm closely tracks the Newton-conjugate gradient procedures developed in the 1980s, but includes enhancements that allow worst-case complexity … Read more
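
A minimal sketch of the core mechanism follows: linear CG applied to the Newton system, exiting as soon as a direction of nonpositive curvature is detected. The capping and scaling safeguards that give the actual algorithm its complexity guarantees are omitted.

```python
import numpy as np

def newton_cg(hess_vec, g, tol=1e-6, max_iter=100):
    """CG on H d = -g with explicit negative-curvature detection.

    hess_vec(v) must return H v. Returns ('nc', p) if p^T H p <= 0 is
    encountered, else ('newton', d) with an approximate Newton step.
    """
    d = np.zeros_like(g)
    r = g.copy()                     # residual of H d + g
    p = -r
    for _ in range(max_iter):
        Hp = hess_vec(p)
        curv = p @ Hp
        if curv <= 0:
            return 'nc', p           # nonpositive curvature direction
        alpha = (r @ r) / curv
        d += alpha * p
        r_new = r + alpha * Hp
        if np.linalg.norm(r_new) <= tol * np.linalg.norm(g):
            return 'newton', d
        p = -r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return 'newton', d
```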

Cubic Regularization Method based on Mixed Factorizations for Unconstrained Minimization

Newton’s method for unconstrained optimization, subject to proper regularization or special trust-region procedures, finds first-order stationary points with precision $\varepsilon$ using at most $O(\varepsilon^{-3/2})$ functional and derivative evaluations. However, the best-known implementations may require several matrix factorizations per iteration or rely on rather expensive matrix decompositions. In this paper, we … Read more
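
For context, the cubic-regularized model that such methods (approximately) minimize at each iterate $x_k$ is typically

\[
m_k(s) = f(x_k) + \nabla f(x_k)^{\top} s + \tfrac{1}{2}\, s^{\top} \nabla^2 f(x_k)\, s + \tfrac{\sigma_k}{3}\, \|s\|^3 ,
\]

where $\sigma_k > 0$ is the regularization parameter; approximate minimization of this model is what yields the $O(\varepsilon^{-3/2})$ evaluation bound, and the paper's contribution concerns doing this step with cheaper, mixed factorizations.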

Concise Complexity Analyses for Trust-Region Methods

Concise complexity analyses are presented for simple trust-region algorithms for solving unconstrained optimization problems. In contrast to a traditional trust-region algorithm, the algorithms considered in this paper require certain control over the choice of the trust-region radius after any successful iteration. The analyses highlight the essential algorithm components required to obtain certain complexity … Read more
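
The sketch below is a bare-bones trust-region loop in which, after a successful iteration, the radius is reset to the gradient norm; this particular control rule is one choice that appears in complexity analyses and is an assumption here, not necessarily the paper's.

```python
import numpy as np

def trust_region(f, grad, step_solver, x, delta=1.0, eta=0.1, max_iter=100):
    """Bare-bones trust-region loop.

    step_solver(g, delta) must return a step s with ||s|| <= delta and
    the predicted model decrease pred > 0 (e.g., from a Cauchy step).
    The radius rule after success is illustrative only.
    """
    for _ in range(max_iter):
        g = grad(x)
        s, pred = step_solver(g, delta)
        rho = (f(x) - f(x + s)) / pred       # actual vs. predicted decrease
        if rho >= eta:                       # successful iteration
            x = x + s
            delta = np.linalg.norm(grad(x))  # controlled radius choice
        else:
            delta *= 0.5                     # shrink on failure
    return x
```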

Regional Complexity Analysis of Algorithms for Nonconvex Smooth Optimization

A strategy is proposed for characterizing the worst-case performance of algorithms for solving nonconvex smooth optimization problems. Contemporary analyses characterize worst-case performance by providing, under certain assumptions on an objective function, an upper bound on the number of iterations (or function or derivative evaluations) required until a $p$th-order stationarity condition is approximately satisfied. This arguably … Read more
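
For reference, the approximate stationarity conditions for $p = 1$ and $p = 2$ are commonly taken as

\[
\|\nabla f(x)\| \le \varepsilon \quad (p = 1), \qquad \|\nabla f(x)\| \le \varepsilon \ \text{ and } \ \lambda_{\min}\!\bigl(\nabla^2 f(x)\bigr) \ge -\varepsilon \quad (p = 2),
\]

though the two tolerances are sometimes scaled differently (e.g., $-\sqrt{\varepsilon}$ for the curvature condition).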

A structured quasi-Newton algorithm for optimizing with incomplete Hessian information

We present a structured quasi-Newton algorithm for unconstrained optimization problems in which some second-order derivatives or Hessian terms are unavailable. We provide a formal derivation of the well-known BFGS secant update formula that approximates only the missing Hessian terms, and we propose a line-search quasi-Newton algorithm based on a modification of the Wolfe conditions that converges to first-order … Read more
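
A minimal sketch of the structured idea follows, assuming the Hessian splits as a known part $K(x)$ plus a missing part approximated by a matrix $A$, which leads to the structured secant condition $A_{+} s = y - K(x_{+}) s$. The names and this exact update are illustrative assumptions, not necessarily the paper's derivation.

```python
import numpy as np

def structured_bfgs_update(A, K_next, s, y):
    """BFGS update of the approximation A to the missing Hessian part.

    With Hessian ~ K(x) + E(x), K known and E unavailable, requiring
    (K_next + A_next) s = y gives A_next s = y_t, y_t = y - K_next s.
    """
    y_t = y - K_next @ s
    if s @ y_t <= 0:
        return A                             # skip to preserve definiteness
    As = A @ s
    return (A - np.outer(As, As) / (s @ As)
              + np.outer(y_t, y_t) / (s @ y_t))
```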