A Dwindling Filter Line Search Method for Unconstrained Optimization

In this paper, we propose a new dwindling multidimensional filter second-order line search method for solving large-scale unconstrained optimization problems. Usually, the multidimensional filter is constructed with a fixed envelope, which is a strict condition for the gradient vectors. A dwindling multidimensional filter technique, which is a modification and improvement of the original multidimensional filter, …
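A multidimensional filter of the kind described above stores gradient components of previously accepted points and rejects trial points that are dominated by any stored entry. The following sketch illustrates one plausible acceptance test with a relative envelope; the envelope value, the filter entries, and the exact dominance rule are illustrative assumptions, not the paper's dwindling scheme.

```python
import numpy as np

def acceptable(g_trial, filter_entries, envelope):
    """Illustrative multidimensional-filter test: the trial gradient is
    acceptable if, against every stored filter entry, at least one
    component improves by the (relative) envelope margin.  A 'dwindling'
    filter would shrink `envelope` as the step size shrinks."""
    a = np.abs(g_trial)
    for gf in filter_entries:
        margin = np.abs(gf) - envelope * np.abs(gf)
        if not np.any(a <= margin):
            return False  # dominated by this entry: reject
    return True
```

With a fixed envelope the margin never relaxes; the dwindling variant weakens it near a solution, which is what makes the acceptance condition less strict than the original filter's.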

A Perry Descent Conjugate Gradient Method with Restricted Spectrum

A new nonlinear conjugate gradient method, based on Perry’s idea, is presented. It is shown that its sufficient descent property holds independently of any line search and that the eigenvalues of $P_{k+1}^{\mathrm{T}}P_{k+1}$ are bounded above, where $P_{k+1}$ is the iteration matrix of the new method. Thus, global convergence is proven by spectral analysis …

Updating the regularization parameter in the adaptive cubic regularization algorithm

The adaptive cubic regularization method [Cartis, Gould, Toint, 2009-2010] has recently been proposed for solving unconstrained minimization problems. At each iteration of this method, the objective function is replaced by a cubic approximation that includes an adaptive regularization parameter whose role is related to the local Lipschitz constant of the objective’s Hessian. We present new …
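The cubic approximation referred to here is the model m(s) = f + gᵀs + ½ sᵀHs + (σ/3)‖s‖³, where σ is the adaptive regularization parameter. Below is a minimal sketch of the model and a generic ARC-style update of σ based on the ratio ρ of actual to predicted reduction; the thresholds and growth factor are illustrative defaults, not the updating strategies proposed in this paper.

```python
import numpy as np

def cubic_model(f, g, H, sigma, s):
    # m(s) = f + g^T s + 1/2 s^T H s + (sigma/3) ||s||^3
    return f + g @ s + 0.5 * s @ H @ s + (sigma / 3.0) * np.linalg.norm(s) ** 3

def update_sigma(sigma, rho, eta1=0.1, eta2=0.9, gamma=2.0):
    """Generic ARC-style rule (illustrative constants): shrink sigma after
    very successful steps, keep it after successful ones, grow it otherwise."""
    if rho >= eta2:
        return max(sigma / gamma, 1e-8)   # very successful: be more optimistic
    if rho >= eta1:
        return sigma                       # successful: leave sigma alone
    return sigma * gamma                   # unsuccessful: regularize harder
```

A larger σ mimics a larger local Lipschitz estimate of the Hessian, shortening the step; the paper's contribution concerns smarter choices of this update.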

Derivative-free Optimization of Expensive Functions with Computational Error Using Weighted Regression

We propose a derivative-free algorithm for optimizing computationally expensive functions with computational error. The algorithm is based on the trust region regression method by Conn, Scheinberg, and Vicente [4], but uses weighted regression to obtain more accurate model functions at each trust region iteration. A heuristic weighting scheme is proposed which simultaneously handles i) differing …
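Weighted regression of the kind used here fits a model by minimizing a weighted sum of squared residuals, so that more trustworthy samples pull harder on the fit. This one-dimensional sketch fits a quadratic by weighted least squares; the paper fits full n-dimensional quadratic models with its own weighting scheme, which is not reproduced here.

```python
import numpy as np

def weighted_quadratic_model_1d(x, f, w):
    """Fit c0 + c1*x + c2*x^2 by weighted least squares:
    minimize sum_i w_i * (model(x_i) - f_i)^2.
    Scaling rows by sqrt(w_i) turns this into an ordinary lstsq problem."""
    A = np.vander(x, 3, increasing=True)          # columns [1, x, x^2]
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], f * sw, rcond=None)
    return coef
```

Downweighting a sample with large computational error keeps a single noisy evaluation from distorting the whole trust-region model.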

On the convergence of trust region algorithms for unconstrained minimization without derivatives

We consider iterative trust region algorithms for the unconstrained minimization of an objective function F(x) of n variables, when F is differentiable but no derivatives are available, and when each model of F is a linear or quadratic polynomial. The models interpolate F at n+1 points, which defines them uniquely when they are linear polynomials. …
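The uniqueness claim for linear models can be made concrete: a linear polynomial c + gᵀx in n variables has n+1 coefficients, so interpolating F at n+1 affinely independent points pins it down by solving an (n+1)×(n+1) linear system. A minimal sketch:

```python
import numpy as np

def linear_interpolation_model(X, f):
    """Recover the unique linear model c + g^T x that interpolates the
    values f at the n+1 points given as rows of X (assumed affinely
    independent, so the interpolation system is nonsingular)."""
    n_plus_1 = X.shape[0]
    A = np.hstack([np.ones((n_plus_1, 1)), X])   # rows [1, x_i^T]
    c_g = np.linalg.solve(A, f)
    return c_g[0], c_g[1:]                        # constant term, gradient
```

For quadratic models the same n+1 interpolation conditions no longer determine the model uniquely, which is where the freedom exploited by these algorithms comes from.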

Complexity bounds for second-order optimality in unconstrained optimization

This paper examines worst-case evaluation bounds for finding weak minimizers in unconstrained optimization. For the cubic regularization algorithm, Nesterov and Polyak (2006) and Cartis, Gould and Toint (2010) show that at most O(epsilon^{-3}) iterations may be needed to find an iterate which is within epsilon of satisfying second-order optimality conditions. We first show …

The Inexact Spectral Bundle Method for Convex Quadratic Semidefinite Programming

We present an inexact spectral bundle method for solving convex quadratic semidefinite optimization problems. This method is a first-order method, and hence requires much less computation per iteration than second-order approaches such as interior-point methods. In each iteration of our method, we solve an eigenvalue minimization problem inexactly, and solve a small convex quadratic semidefinite …

On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization

The worst-case complexity analysis of (optimal) function/gradient evaluations available for the Adaptive Regularization algorithm with Cubics (ARC) for nonconvex smooth unconstrained optimization is extended to finite-difference versions of this algorithm, yielding complexity bounds for first-order and derivative-free methods applied to the same problem class. A comparison with the results obtained for derivative-free methods by Vicente …

Beyond symmetric Broyden for updating quadratic models in minimization without derivatives

Some highly successful algorithms for unconstrained minimization without derivatives construct changes to the variables by applying trust region methods to quadratic approximations to the objective function F(x), x in R^n. A quadratic model has (n+1)(n+2)/2 independent parameters, but each new model may interpolate only 2n+1 values of F, for instance. The symmetric Broyden method takes …
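The gap between the two counts mentioned above is easy to quantify: a quadratic in R^n has (n+1)(n+2)/2 independent parameters, while only 2n+1 interpolation conditions are imposed per new model, so most degrees of freedom must be fixed by an updating rule such as symmetric Broyden.

```python
def quadratic_params(n):
    # independent parameters of a quadratic model in R^n:
    # 1 constant + n gradient entries + n(n+1)/2 symmetric Hessian entries
    return (n + 1) * (n + 2) // 2

def interpolation_points(n):
    # interpolation conditions per new model in the 2n+1 setting
    return 2 * n + 1

for n in (10, 100):
    print(n, quadratic_params(n), interpolation_points(n))
```

For n = 10 the model has 66 parameters but only 21 interpolation conditions, and for n = 100 it is 5151 versus 201, which is why the choice of updating rule dominates the model's quality in high dimensions.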

A Retrospective Filter Trust Region Algorithm For Unconstrained Optimization

In this paper, we propose a retrospective filter trust region algorithm for unconstrained optimization, which is based on the framework of the retrospective trust region method combined with the multidimensional filter technique. The new algorithm gives a good estimate of the trust region radius and relaxes the condition for accepting a trial step …