Conjugate gradient methods based on secant conditions that generate descent search directions for unconstrained optimization

Conjugate gradient methods have attracted attention because they can be applied directly to large-scale unconstrained optimization problems. To incorporate second-order information about the objective function into conjugate gradient methods, Dai and Liao (2001) proposed a conjugate gradient method based on the secant condition. However, their method does not necessarily generate … Read more
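
For reference, the Dai–Liao choice referred to here is commonly written as follows (standard notation with $g_k = \nabla f(x_k)$, $s_{k-1} = x_k - x_{k-1}$, $y_{k-1} = g_k - g_{k-1}$, and a parameter $t \ge 0$; recalled from the literature rather than quoted from the paper):

$$d_k = -g_k + \beta_k^{\mathrm{DL}} d_{k-1}, \qquad \beta_k^{\mathrm{DL}} = \frac{g_k^{\top} y_{k-1} - t\, g_k^{\top} s_{k-1}}{d_{k-1}^{\top} y_{k-1}}.$$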

A short note on the global convergence of the unmodified PRP method

It is well known that the search direction generated by the standard (unmodified) PRP nonlinear conjugate gradient method is not necessarily a descent direction of the objective function, which makes its global convergence for general functions difficult to establish. However, somewhat to our surprise, this short note shows with an easy proof that the unmodified PRP method still … Read more
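
For context, the unmodified PRP update in question is the classical one (standard notation, recalled from the literature rather than from the note itself):

$$d_0 = -g_0, \qquad d_k = -g_k + \beta_k^{\mathrm{PRP}} d_{k-1}, \qquad \beta_k^{\mathrm{PRP}} = \frac{g_k^{\top}(g_k - g_{k-1})}{\|g_{k-1}\|^2}.$$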

A conjugate directions approach to improve the limited-memory BFGS method

Simple modifications of the limited-memory BFGS method (L-BFGS) for large-scale unconstrained optimization are considered, which consist of corrections (derived from the idea of conjugate directions) to the difference vectors used, utilizing information from the preceding iteration. In the case of quadratic objective functions, the improvement in convergence is, in a certain sense, the best possible and … Read more
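
As background, a minimal NumPy sketch of the standard L-BFGS two-loop recursion is given below; it shows plain L-BFGS only, and the conjugate-direction corrections of the difference vectors proposed in the paper are not reproduced.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Standard L-BFGS two-loop recursion (illustrative sketch).

    g       -- current gradient
    s_list  -- stored steps s_i = x_{i+1} - x_i (oldest first)
    y_list  -- stored gradient differences y_i = g_{i+1} - g_i (oldest first)
    Returns an approximation of the quasi-Newton direction -H_k g.
    """
    q = g.copy()
    stack = []
    # First loop: newest pair to oldest.
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / y.dot(s)
        alpha = rho * s.dot(q)
        q -= alpha * y
        stack.append((alpha, rho, s, y))
    # Initial Hessian approximation gamma * I from the most recent pair.
    gamma = s_list[-1].dot(y_list[-1]) / y_list[-1].dot(y_list[-1]) if s_list else 1.0
    r = gamma * q
    # Second loop: oldest pair to newest.
    for alpha, rho, s, y in reversed(stack):
        beta = rho * y.dot(r)
        r += (alpha - beta) * s
    return -r
```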

Convergence of the restricted Nelder-Mead algorithm in two dimensions

The Nelder-Mead algorithm, a longstanding direct search method for unconstrained optimization published in 1965, is designed to minimize a scalar-valued function $f$ of $n$ real variables using only function values, without any derivative information. Each Nelder–Mead iteration is associated with a nondegenerate simplex defined by $n + 1$ vertices and their function values; a typical … Read more
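
As a purely illustrative usage example (unrelated to the restricted variant analyzed in the paper), the standard algorithm is available off the shelf in SciPy and, as described above, needs only function values:

```python
from scipy.optimize import minimize

# A simple strictly convex function of two variables (arbitrary test problem).
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2

# Nelder-Mead uses only function values: no gradient is supplied.
result = minimize(f, x0=[5.0, 5.0], method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8})
print(result.x)  # expected to be close to [1, -2]
```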

A Note About The Complexity Of Minimizing Nesterov’s Smooth Chebyshev-Rosenbrock Function

This short note considers and resolves the apparent contradiction between known worst-case complexity results for first- and second-order methods for solving unconstrained smooth nonconvex optimization problems and a recent note by Jarre (2011) implying a very large lower bound on the number of iterations required to reach the solution’s neighbourhood for a specific problem with … Read more

A Nonlinear Conjugate Gradient Algorithm with An Optimal Property and An Improved Wolfe Line Search

In this paper, we seek the conjugate gradient direction closest to the direction of the scaled memoryless BFGS method and propose a family of conjugate gradient methods for unconstrained optimization. An improved Wolfe line search is also proposed, which can avoid a numerical drawback of the Wolfe line search and guarantee the global convergence of … Read more
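
For reference, the standard Wolfe conditions on the step size $\alpha_k$ along a direction $d_k$ read as follows, with constants $0 < c_1 < c_2 < 1$ (the improved line search proposed in the paper modifies these conditions and is not reproduced here):

$$f(x_k + \alpha_k d_k) \le f(x_k) + c_1 \alpha_k \nabla f(x_k)^{\top} d_k, \qquad \nabla f(x_k + \alpha_k d_k)^{\top} d_k \ge c_2 \nabla f(x_k)^{\top} d_k.$$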

On Nesterov’s Smooth Chebyshev-Rosenbrock Function

We discuss a modification of the chained Rosenbrock function introduced by Nesterov, a polynomial of degree four in $n$ variables. Its only stationary point is the global minimizer with optimal value zero. An initial point is given such that any continuous piecewise linear descent path consists of at least an exponential number of $0.72 \cdot … Read more
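
The smooth Chebyshev–Rosenbrock function is usually written in the following form in the related literature (recalled here for orientation; the modification discussed in the paper may differ in details):

$$f(x) = \tfrac{1}{4}(x_1 - 1)^2 + \sum_{i=1}^{n-1}\bigl(x_{i+1} - 2x_i^2 + 1\bigr)^2,$$

a degree-four polynomial whose global minimizer is $x^{*} = (1, \dots, 1)$ with $f(x^{*}) = 0$, consistent with the description above.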

A Matrix-Free Approach For Solving The Gaussian Process Maximum Likelihood Problem

Gaussian processes are the cornerstone of statistical analysis in many application areas. Nevertheless, most of the applications are limited by their need to use the Cholesky factorization in the computation of the likelihood. In this work, we present a matrix-free approach for computing the solution of the maximum likelihood problem involving Gaussian processes. … Read more
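
As a rough sketch of the matrix-free idea, the quadratic term of the Gaussian log-likelihood can be evaluated with conjugate gradients using only matrix-vector products with the covariance matrix. The kernel, grid, and data below are arbitrary assumptions for illustration, and the log-determinant term of the likelihood (which also has to be handled in practice) is not estimated here.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Illustrative setup: squared-exponential covariance on a 1-D grid (assumed).
n = 500
x = np.linspace(0.0, 1.0, n)
ell = 0.1  # length-scale, arbitrary
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell ** 2) + 1e-6 * np.eye(n)

y = np.random.default_rng(0).standard_normal(n)  # synthetic observations

# In a truly matrix-free code this matvec would avoid forming K explicitly
# (e.g., via structured or fast transforms); here K is dense for simplicity.
K_op = LinearOperator((n, n), matvec=lambda v: K @ v)

# Solve K alpha = y by conjugate gradients, using only matvecs with K.
alpha, info = cg(K_op, y)

quad_term = -0.5 * y.dot(alpha)  # the -1/2 y^T K^{-1} y term of the log-likelihood
print(quad_term, info)
```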

Difference Filter Preconditioning for Large Covariance Matrices

In many statistical applications one must solve linear systems corresponding to large, dense, and possibly irregularly structured covariance matrices. These matrices are often ill-conditioned; for example, the condition number increases at least linearly with respect to the size of the matrix when observations of a random process are obtained from a fixed domain. This … Read more
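
As a loose illustration of the underlying idea (not of the paper's construction), the snippet below compares the condition number of an exponential covariance matrix on a fixed one-dimensional domain before and after applying a simple first-order difference filter; the kernel and its range parameter are arbitrary assumptions.

```python
import numpy as np

n = 400
t = np.linspace(0.0, 1.0, n)
# Exponential (Ornstein-Uhlenbeck) covariance on a fixed 1-D domain.
K = np.exp(-np.abs(t[:, None] - t[None, :]) / 0.2)

# Bidiagonal first-order difference filter D (row 0 left as identity so D is invertible).
D = np.eye(n)
D[1:, :-1] -= np.eye(n - 1)

K_filtered = D @ K @ D.T  # covariance of the filtered observations

print("cond(K)       =", np.linalg.cond(K))
print("cond(D K D^T) =", np.linalg.cond(K_filtered))
# The filtered matrix is typically far better conditioned, which is what makes
# such filters attractive as preconditioners for iterative solvers.
```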

Global Convergence of Radial Basis Function Trust Region Derivative-Free Algorithms

We analyze globally convergent derivative-free trust region algorithms relying on radial basis function interpolation models. Our results extend the recent work of Conn, Scheinberg, and Vicente to fully linear models that have a nonlinear term. We characterize the types of radial basis functions that fit in our analysis and thus show global convergence to first-order … Read more
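
A minimal sketch of the kind of interpolation model such methods build is shown below: a cubic radial basis function with a linear polynomial tail fitted to a handful of sample points. The trust-region machinery and the fully linear error analysis are not reproduced, and all names and sample points are illustrative.

```python
import numpy as np

def fit_rbf_model(X, f):
    """Fit s(x) = sum_i lam_i * phi(||x - x_i||) + c0 + c^T x with phi(r) = r^3.

    X -- (m, n) interpolation points, f -- (m,) function values.
    Returns (lam, c) with c = [c0, c_1, ..., c_n].
    """
    m, n = X.shape
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    Phi = r ** 3                              # cubic RBF kernel matrix
    P = np.hstack([np.ones((m, 1)), X])       # linear polynomial tail
    # Standard RBF interpolation system with polynomial side conditions.
    A = np.block([[Phi, P], [P.T, np.zeros((n + 1, n + 1))]])
    rhs = np.concatenate([f, np.zeros(n + 1)])
    sol = np.linalg.solve(A, rhs)
    return sol[:m], sol[m:]

def eval_rbf_model(x, X, lam, c):
    r = np.linalg.norm(X - x, axis=1)
    return (lam * r ** 3).sum() + c[0] + c[1:].dot(x)

# Tiny usage example on a 2-D quadratic (sample points chosen arbitrarily).
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
fvals = np.array([p[0] ** 2 + p[1] ** 2 for p in X])
lam, c = fit_rbf_model(X, fvals)
print(eval_rbf_model(np.array([0.25, 0.75]), X, lam, c))
```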