Modifications of the limited-memory BNS method for better satisfaction of previous quasi-Newton conditions

Several modifications of the limited-memory variable metric BNS method for large-scale unconstrained optimization are proposed, which consist in corrections (derived from the idea of conjugate directions) of the difference vectors used, to improve satisfaction of previous quasi-Newton conditions, utilizing information from previous or subsequent iterations. In the case of quadratic objective functions, conjugacy of all …
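For context, a limited-memory method with memory $m$ would ideally satisfy all recent quasi-Newton (secant) conditions, not just the latest one; in standard notation (assumed here, not taken from the paper),

$$
B_{k+1} s_i = y_i, \qquad s_i = x_{i+1} - x_i, \quad y_i = \nabla f(x_{i+1}) - \nabla f(x_i), \qquad i = k-m+1, \dots, k,
$$

whereas BFGS-type updates guarantee only the most recent condition $B_{k+1} s_k = y_k$; correcting the stored pairs $(s_i, y_i)$ is one way to make the earlier conditions hold more accurately.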

A new family of high order directions for unconstrained optimization inspired by Chebyshev and Shamanskii methods

The Newton–Raphson method, dating from 1669–1670, is still used to solve systems of equations and unconstrained optimization problems. Since then, other algorithms inspired by Newton's method have been proposed: in 1839 Chebyshev developed a high-order algorithm with cubic convergence, and in 1967 Shamanskii proposed an acceleration of Newton's method. By considering Newton-type methods as displacement directions, …
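For orientation, the three methods named above can be written in standard scalar form for solving $f(x) = 0$ (notation assumed here; for optimization they are applied to the gradient):

$$
x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} \quad \text{(Newton, quadratic convergence)},
$$
$$
x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} - \frac{f''(x_k)\, f(x_k)^2}{2 f'(x_k)^3} \quad \text{(Chebyshev, cubic convergence)},
$$
$$
y_k = x_k - \frac{f(x_k)}{f'(x_k)}, \qquad x_{k+1} = y_k - \frac{f(y_k)}{f'(x_k)} \quad \text{(Shamanskii, reusing the derivative)}.
$$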

Sobolev Seminorm of Quadratic Functions with Applications to Derivative-Free Optimization

This paper studies the $H^1$ Sobolev seminorm of quadratic functions. The research is motivated by the least-norm interpolation that is widely used in derivative-free optimization. We express the $H^1$ seminorm of a quadratic function explicitly in terms of the Hessian and the gradient when the underlying domain is a ball. The seminorm gives new insights …
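To illustrate the kind of explicit expression involved, a direct computation for the unit ball $B \subset \mathbb{R}^n$ centered at the origin (a special case assumed here for brevity; the paper treats general balls) gives, for $q(x) = \frac12 x^T H x + g^T x + c$ with $H$ symmetric,

$$
|q|_{H^1(B)}^2 = \int_B \|\nabla q(x)\|_2^2 \, dx = \int_B \|Hx + g\|_2^2 \, dx = \operatorname{vol}(B) \left( \|g\|_2^2 + \frac{\|H\|_F^2}{n+2} \right),
$$

using $\int_B x \, dx = 0$ and $\int_B x x^T \, dx = \frac{\operatorname{vol}(B)}{n+2} I$.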

A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications

A hybrid HS and PRP type conjugate gradient method for smooth optimization is presented, which reduces to the classical PRP or HS method if an exact line search is used, and which converges globally and R-linearly for nonconvex functions with an inexact backtracking line search under standard assumptions. An inexact version of the proposed method which admits …
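For reference, with $g_k = \nabla f(x_k)$, $y_{k-1} = g_k - g_{k-1}$, and search directions $d_k = -g_k + \beta_k d_{k-1}$, the two classical parameters are

$$
\beta_k^{HS} = \frac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}}, \qquad \beta_k^{PRP} = \frac{g_k^T y_{k-1}}{\|g_{k-1}\|^2};
$$

under an exact line search $g_k^T d_{k-1} = 0$, hence $d_{k-1}^T y_{k-1} = \|g_{k-1}\|^2$ and the two formulas coincide, which is why a hybrid of them can reduce to either classical method in that case.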

Sensitivity analysis for two-level value functions with applications to bilevel programming

This paper contributes to a deeper understanding of the link between a now conventional framework in hierarchical optimization, spread under the name of the optimistic bilevel problem, and its initial, more difficult formulation, which we call here the original optimistic bilevel optimization problem. It follows from this research that, although the process of deriving necessary …
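In standard notation (assumed here for illustration), the two-level value function behind the optimistic model is

$$
\varphi_o(x) = \min_y \{ F(x,y) : y \in S(x) \}, \qquad S(x) = \operatorname{arg\,min}_y \{ f(x,y) : g(x,y) \le 0 \},
$$

and the optimistic bilevel problem can then be written as $\min_x \varphi_o(x)$ over the upper-level feasible set.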

Conjugate gradient methods based on secant conditions that generate descent search directions for unconstrained optimization

Conjugate gradient methods have attracted attention because they can be directly applied to large-scale unconstrained optimization problems. In order to incorporate second-order information of the objective function into conjugate gradient methods, Dai and Liao (2001) proposed a conjugate gradient method based on the secant condition. However, their method does not necessarily generate …
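For context, the Dai–Liao (2001) choice referred to above is, with $s_{k-1} = x_k - x_{k-1}$ and $y_{k-1} = g_k - g_{k-1}$,

$$
\beta_k^{DL} = \frac{g_k^T (y_{k-1} - t\, s_{k-1})}{d_{k-1}^T y_{k-1}}, \qquad t \ge 0,
$$

which makes $d_k = -g_k + \beta_k^{DL} d_{k-1}$ satisfy the secant-based condition $d_k^T y_{k-1} = -t\, g_k^T s_{k-1}$; nothing in this construction forces the descent property $g_k^T d_k < 0$, which is the gap addressed here.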

A short note on the global convergence of the unmodified PRP method

It is well-known that the search direction generated by the standard (unmodified) PRP nonlinear conjugate gradient method is not necessarily a descent direction of the objective function, which complicates its global convergence analysis for general functions. However, to our surprise, it is easily proved in this short note that the unmodified PRP method still …
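The difficulty is visible directly from the formulas (standard notation, not taken from the note itself):

$$
d_k = -g_k + \beta_k^{PRP} d_{k-1}, \qquad \beta_k^{PRP} = \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2},
$$

so $g_k^T d_k = -\|g_k\|^2 + \beta_k^{PRP}\, g_k^T d_{k-1}$, and with an inexact line search the second term can dominate, i.e. $d_k$ need not be a descent direction.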

A conjugate directions approach to improve the limited-memory BFGS method

Simple modifications of the limited-memory BFGS method (L-BFGS) for large-scale unconstrained optimization are considered, which consist in corrections (derived from the idea of conjugate directions) of the difference vectors used, utilizing information from the preceding iteration. In case of quadratic objective functions, the improvement of convergence is the best one in some sense and …
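As an illustrative sketch of the conjugate-directions idea (assumed notation; not necessarily the paper's exact correction): for a quadratic $f(x) = \frac12 x^T A x - b^T x$ one has $y_i = A s_i$ exactly, so conjugacy of the stored pair from the preceding iteration, $s_{k-1}^T A s_k = 0$, can be enforced through the computable correction

$$
\tilde{s}_{k-1} = s_{k-1} - \frac{s_{k-1}^T y_k}{s_k^T y_k}\, s_k, \qquad \tilde{y}_{k-1} = y_{k-1} - \frac{s_{k-1}^T y_k}{s_k^T y_k}\, y_k,
$$

which gives $\tilde{s}_{k-1}^T y_k = 0$ while preserving $\tilde{y}_{k-1} = A \tilde{s}_{k-1}$ in the quadratic case.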

Convergence of the restricted Nelder-Mead algorithm in two dimensions

The Nelder–Mead algorithm, a longstanding direct search method for unconstrained optimization published in 1965, is designed to minimize a scalar-valued function $f$ of $n$ real variables using only function values, without any derivative information. Each Nelder–Mead iteration is associated with a nondegenerate simplex defined by $n + 1$ vertices and their function values; a typical …
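For orientation, with vertices ordered so that $f(x_1) \le \cdots \le f(x_{n+1})$ and $\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$ the centroid of the best $n$ vertices, the standard trial points are

$$
x_r = \bar{x} + (\bar{x} - x_{n+1}), \quad x_e = \bar{x} + 2(\bar{x} - x_{n+1}), \quad x_{oc} = \bar{x} + \tfrac12 (\bar{x} - x_{n+1}), \quad x_{ic} = \bar{x} - \tfrac12 (\bar{x} - x_{n+1})
$$

(reflection, expansion, outside contraction, inside contraction, with the usual coefficient choices); the restricted variant analyzed here is the one that disallows expansion steps.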

A Note About The Complexity Of Minimizing Nesterov’s Smooth Chebyshev-Rosenbrock Function

This short note considers and resolves the apparent contradiction between known worst-case complexity results for first- and second-order methods for solving unconstrained smooth nonconvex optimization problems and a recent note by Jarre (2011) implying a very large lower bound on the number of iterations required to reach the solution’s neighbourhood for a specific problem with …
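For reference, the smooth Chebyshev–Rosenbrock function in question is commonly stated as (an assumed standard form; a nonsmooth variant replaces the squared coupling terms with absolute values)

$$
f(x) = \frac{1}{4} (x_1 - 1)^2 + \sum_{i=1}^{n-1} \left( x_{i+1} - 2 x_i^2 + 1 \right)^2,
$$

whose coupling terms encode the Chebyshev-polynomial recursion $x_{i+1} = 2 x_i^2 - 1$; iterates that track this recursion can be forced through very many steps before reaching a neighbourhood of the minimizer $x^* = (1, \dots, 1)$.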