Complexity bounds for second-order optimality in unconstrained optimization

This paper examines worst-case evaluation bounds for finding weak minimizers in unconstrained optimization. For the cubic regularization algorithm, Nesterov and Polyak (2006) and Cartis, Gould and Toint (2010) show that at most O(epsilon^{-3}) iterations may be required to find an iterate that is within epsilon of satisfying second-order optimality conditions. We first show …
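For context, a minimal sketch (in assumed, standard notation, since the abstract is truncated) of the cubic model whose approximate minimization drives each iteration of the algorithm these bounds concern:

```latex
% Cubic regularization step (Nesterov-Polyak / ARC style): at iterate x_k,
% the trial step approximately minimizes the cubic model below;
% sigma_k > 0 is the regularization weight (notation illustrative).
\[
  m_k(s) = f(x_k) + \nabla f(x_k)^{\top} s
         + \tfrac{1}{2}\, s^{\top} \nabla^2 f(x_k)\, s
         + \tfrac{\sigma_k}{3}\, \|s\|^{3}.
\]
```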

The Inexact Spectral Bundle Method for Convex Quadratic Semidefinite Programming

We present an inexact spectral bundle method for solving convex quadratic semidefinite optimization problems. This method is a first-order method and hence incurs far lower computational cost per iteration than second-order approaches such as interior-point methods. In each iteration of our method, we solve an eigenvalue minimization problem inexactly and solve a small convex quadratic semidefinite …
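As a hedged illustration of the problem shape such a method targets (the operator \mathcal{A}, data C, b, Q and the eigenvalue reformulation below are assumed for exposition, not taken from the paper):

```latex
% A spectral-bundle-style view (illustrative): a convex quadratic SDP can
% be posed dually as minimizing a maximum-eigenvalue function plus a
% smooth quadratic term, which first-order bundle steps attack with
% inexact extremal-eigenvalue computations.
\[
  \min_{y \in \mathbb{R}^m} \;
  \lambda_{\max}\!\bigl(C - \mathcal{A}^{*} y\bigr)
  + b^{\top} y + \tfrac{1}{2}\, y^{\top} Q\, y .
\]
```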

On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization

The (optimal) worst-case function/gradient evaluation complexity analysis available for the Adaptive Regularization algorithm with Cubics (ARC) for nonconvex smooth unconstrained optimization is extended to finite-difference versions of this algorithm, yielding complexity bounds for first-order and derivative-free methods applied to the same problem class. A comparison with the results obtained for derivative-free methods by Vicente …
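A minimal sketch of the forward-difference gradient substitution that such finite-difference variants rely on (illustrative Python, not the paper's ARC code; the step size h and function names are assumptions):

```python
import numpy as np

def fd_gradient(f, x, h=1e-8):
    """Forward-difference gradient estimate. Each coordinate costs one
    extra evaluation of f, which is how gradient-evaluation bounds
    translate into function-evaluation bounds for derivative-free runs."""
    fx = f(x)
    g = np.zeros(x.size)
    for i in range(x.size):
        e = np.zeros(x.size)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

# Example: gradient of f(x) = ||x||^2 at (1, 2) is approximately (2, 4)
print(fd_gradient(lambda v: np.dot(v, v), np.array([1.0, 2.0])))
```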

Beyond symmetric Broyden for updating quadratic models in minimization without derivatives

Some highly successful algorithms for unconstrained minimization without derivatives construct changes to the variables by applying trust region methods to quadratic approximations to the objective function F(x), x in R^n. A quadratic model has (n+1)(n+2)/2 independent parameters, but each new model may interpolate only 2n+1 values of F, for instance. The symmetric Broyden method takes …
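For readers unfamiliar with the least-change idea the entry alludes to, a sketch in assumed notation (y_1, ..., y_{2n+1} are the interpolation points; Q_old and Q_new are successive models):

```latex
% Least-change interpolation update (sketch): since 2n+1 conditions cannot
% determine all (n+1)(n+2)/2 parameters of a quadratic, the symmetric
% Broyden approach picks the model whose Hessian moves least in the
% Frobenius norm subject to interpolating F at the current points.
\[
  \min_{Q_{\mathrm{new}}}\;
  \bigl\| \nabla^2 Q_{\mathrm{new}} - \nabla^2 Q_{\mathrm{old}} \bigr\|_F
  \quad \text{s.t.} \quad
  Q_{\mathrm{new}}(y_j) = F(y_j), \;\; j = 1, \dots, 2n+1 .
\]
```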

A Retrospective Filter Trust Region Algorithm For Unconstrained Optimization

In this paper, we propose a retrospective filter trust region algorithm for unconstrained optimization, which builds on the framework of the retrospective trust region method and combines it with the multidimensional filter technique. The new algorithm gives a good estimate of the trust region radius and relaxes the condition for accepting a trial step …
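A sketch of the retrospective idea in assumed notation (m_{k+1} is the model built after the step; the exact acceptance and filter tests are in the paper):

```latex
% Retrospective ratio (illustrative): the radius for iteration k+1 is set
% from how well the *new* model m_{k+1} explains the step just taken,
% instead of the classical ratio based on the old model m_k.
\[
  \tilde{\rho}_{k+1} =
  \frac{f(x_k) - f(x_{k+1})}{m_{k+1}(x_k) - m_{k+1}(x_{k+1})} .
\]
```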

On the convergence of a wide range of trust region methods for unconstrained optimization

We consider trust region methods for seeking the unconstrained minimum of an objective function F(x), x being the vector of variables, when the gradient grad F is available. The methods are iterative with a starting point x_1 being given. The new vector of variables x_(k+1) is derived from a quadratic approximation to F that interpolates …
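For context, the generic iteration covered by such convergence theory, in assumed notation (Q_k is the quadratic model and Delta_k the trust region radius):

```latex
% Generic trust region step and acceptance ratio (illustrative):
\[
  s_k \approx \arg\min_{\|s\| \le \Delta_k} Q_k(x_k + s),
  \qquad
  \rho_k = \frac{F(x_k) - F(x_k + s_k)}{Q_k(x_k) - Q_k(x_k + s_k)} ,
\]
% with x_(k+1) = x_k + s_k when rho_k is large enough, and Delta_(k+1)
% enlarged or reduced according to rho_k.
```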

A three-term conjugate gradient method with sufficient descent property for unconstrained optimization

Conjugate gradient methods are widely used for solving large-scale unconstrained optimization problems because they do not require the storage of matrices. In this paper, we propose a general form of three-term conjugate gradient methods that always generate a sufficient descent direction. We give a sufficient condition for the global convergence of the proposed general method. …
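A representative three-term form, given here only as an illustration of the family and the descent property (the scalars beta_k, theta_k and the constant c are assumptions, not the paper's specific choices):

```latex
% Three-term direction and sufficient descent condition (sketch),
% with g_k = grad f(x_k) and y_k = g_{k+1} - g_k:
\[
  d_{k+1} = -g_{k+1} + \beta_k d_k + \theta_k y_k ,
  \qquad
  g_k^{\top} d_k \le -c\,\|g_k\|^2 \;\; (c > 0).
\]
```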

A Proximal Algorithm with Quasi Distance. Application to Habit’s Formation

We consider a proximal algorithm with a quasi distance applied to nonconvex and nonsmooth functions satisfying analytic properties for an unconstrained minimization problem. We show the behavioral importance of this proximal point model for habit formation in the decision-making sciences.
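A sketch of the iteration in assumed notation (q is a quasi distance: nonnegative and vanishing only on the diagonal, but possibly asymmetric, which is what lets the model price the cost of changing a habit differently in each direction):

```latex
% Proximal point iteration with a quasi distance q (illustrative):
\[
  x_{k+1} \in \arg\min_{x}\,
  \Bigl\{ f(x) + \lambda_k\, q(x_k, x)^2 \Bigr\},
  \qquad \lambda_k > 0 .
\]
```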

A Limited Memory Steepest Descent Method

The possibilities inherent in steepest descent methods have been considerably amplified by the introduction of the Barzilai-Borwein choice of step-size, and other related ideas. These methods have proved competitive with conjugate gradient methods for large-scale unconstrained minimization problems. This paper suggests a method which is able to take advantage …
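A minimal sketch of the Barzilai-Borwein step-size rule the entry refers to (illustrative Python; this shows the classical BB1 step, not the limited memory method proposed in the paper):

```python
import numpy as np

def bb_descent(grad, x, iters=100, alpha0=1e-4):
    """Steepest descent with the first Barzilai-Borwein step size,
    alpha_k = (s^T s) / (s^T y), s = x_k - x_{k-1}, y = g_k - g_{k-1}."""
    g = grad(x)
    alpha = alpha0
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 0 else alpha0  # guard: fall back on non-positive curvature
        x, g = x_new, g_new
    return x

# Example: minimize f(x) = 0.5 * (x1^2 + 10 * x2^2); the minimizer is 0.
print(bb_descent(lambda v: np.array([1.0, 10.0]) * v, np.array([5.0, 5.0])))
```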

Using approximate secant equations in limited memory methods for multilevel unconstrained optimization

The properties of multilevel optimization problems defined on a hierarchy of discretization grids can be used to define approximate secant equations, which describe the second-order behaviour of the objective function. Following earlier work by Gratton and Toint (2009), we introduce a quasi-Newton method (with a linesearch) and a nonlinear conjugate gradient method that both take …
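For context, the relation being approximated, in assumed notation (B_{k+1} is the quasi-Newton Hessian approximation):

```latex
% Standard secant equation (sketch): a quasi-Newton update asks that
\[
  B_{k+1} s_k = y_k, \qquad
  s_k = x_{k+1} - x_k, \quad y_k = g_{k+1} - g_k ,
\]
% and the grid hierarchy supplies additional, cheaper (s, y) pairs from
% coarser levels that approximately satisfy the same relation.
```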