A branch and bound algorithm for the global optimization of Hessian Lipschitz continuous functions

We present a branch and bound algorithm for the global optimization of a twice differentiable nonconvex objective function with a Lipschitz continuous Hessian over a compact, convex set. The algorithm is based on applying cubic regularization techniques to the objective function within an overlapping branch and bound algorithm for convex constrained global optimization. Unlike other … Read more
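
To fix ideas, here is a minimal sketch of the bounding device such a method can use (assumed helper names; `f`, `g`, `H` are hypothetical callables returning the objective value, gradient and Hessian, and the paper's actual bounds are tighter): a Lipschitz constant `L` for the Hessian yields a cubic underestimator of the objective, whose minimum over a ball gives a lower bound usable inside branch and bound.

```python
import numpy as np

def cubic_lower_bound(f, g, H, c, radius, L):
    """Lower bound on f over the ball ||x - c|| <= radius when the Hessian
    of f is Lipschitz continuous with constant L (illustrative sketch).

    Taylor's theorem with a Lipschitz Hessian gives, for all x in the ball,
        f(x) >= f(c) + g(c)'(x-c) + 0.5 (x-c)'H(c)(x-c) - (L/6)||x-c||^3,
    and bounding each right-hand term separately over the ball yields a
    valid (if coarse) lower bound.
    """
    lam_min = np.linalg.eigvalsh(H(c)).min()        # most negative curvature at c
    return (f(c)
            - np.linalg.norm(g(c)) * radius          # linear term over the ball
            + 0.5 * min(lam_min, 0.0) * radius ** 2  # quadratic term over the ball
            - (L / 6.0) * radius ** 3)               # cubic regularization slack
```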

Trajectory-following methods for large-scale degenerate convex quadratic programming

We consider a class of infeasible, path-following methods for convex quadratic programming. Our methods are designed to be effective for solving both nondegenerate and degenerate problems, where degeneracy is understood to mean the failure of strict complementarity at a solution. Global convergence and a polynomial bound on the number of iterations required are given. An … Read more
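
For orientation, a minimal sketch of one infeasible primal-dual path-following step for a convex QP in the standard form min 0.5 x'Qx + c'x s.t. Ax = b, x >= 0 (a generic textbook step with assumed names, not the paper's specific trajectory-following methods):

```python
import numpy as np

def pdip_step(Q, A, b, c, x, y, z, sigma=0.1):
    """One infeasible primal-dual path-following step (generic sketch)."""
    n, m = len(x), len(b)
    mu = x @ z / n                       # current duality measure
    rd = Q @ x + c - A.T @ y - z         # dual residual
    rp = A @ x - b                       # primal residual
    rc = x * z - sigma * mu              # perturbed complementarity residual
    # Newton (KKT) system for the search direction (dx, dy, dz).
    K = np.block([
        [Q,          -A.T,              -np.eye(n)],
        [A,           np.zeros((m, m)),  np.zeros((m, n))],
        [np.diag(z),  np.zeros((n, m)),  np.diag(x)],
    ])
    d = np.linalg.solve(K, -np.concatenate([rd, rp, rc]))
    dx, dy, dz = d[:n], d[n:n + m], d[n + m:]
    # Fraction-to-the-boundary rule keeps x and z strictly positive.
    alpha = 1.0
    for v, dv in ((x, dx), (z, dz)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, 0.995 * np.min(-v[neg] / dv[neg]))
    return x + alpha * dx, y + alpha * dy, z + alpha * dz
```

Practical solvers would eliminate dz and work with a reduced augmented system rather than the full block form above; the full form is kept here only for readability.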

A Note About The Complexity Of Minimizing Nesterov’s Smooth Chebyshev-Rosenbrock Function

This short note considers and resolves the apparent contradiction between known worst-case complexity results for first- and second-order methods for solving unconstrained smooth nonconvex optimization problems and a recent note by Jarre (2011) implying a very large lower bound on the number of iterations required to reach the solution’s neighbourhood for a specific problem with … Read more
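
For concreteness, one commonly quoted form of the function in question (stated here as an assumption; consult the note for the exact variant and parameters):

```python
import numpy as np

def chebyshev_rosenbrock(x):
    """Nesterov's smooth Chebyshev-Rosenbrock function, in one common form:
        f(x) = (x_1 - 1)^2 / 4 + sum_i (x_{i+1} - 2 x_i^2 + 1)^2,
    with minimizer x* = (1, ..., 1) and f(x*) = 0.
    """
    return 0.25 * (x[0] - 1.0) ** 2 + np.sum((x[1:] - 2.0 * x[:-1] ** 2 + 1.0) ** 2)
```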

Optimal Newton-type methods for nonconvex smooth optimization problems

We consider a general class of second-order iterations for unconstrained optimization that includes regularization and trust-region variants of Newton’s method. For each method in this class, we exhibit a smooth, bounded-below objective function, whose gradient is globally Lipschitz continuous within an open convex set containing any iterates encountered and whose Hessian is $\alpha$-Hölder continuous (for … Read more
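
The simplest member of such a class looks like the following (a generic regularized Newton step, given only to fix ideas; the family considered in the paper also covers trust-region and cubic-regularization variants):

```python
import numpy as np

def regularized_newton_step(g, H, sigma):
    """One regularized Newton step s = -(H + sigma * I)^{-1} g, a simple
    representative of second-order iterations (generic sketch; sigma > 0
    is the regularization weight)."""
    return np.linalg.solve(H + sigma * np.eye(len(g)), -g)
```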

On the complexity of finding first-order critical points in constrained nonlinear optimization

The complexity of finding $\epsilon$-approximate first-order critical points for the general smooth constrained optimization problem is shown to be no worse than $O(\epsilon^{-2})$ in terms of function and constraint evaluations. This result is obtained by analyzing the worst-case behaviour of a first-order short-step homotopy algorithm consisting of a feasibility phase followed by an optimization phase, … Read more
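
The counting argument behind bounds of this type is short (a generic sketch, not the paper's proof): if every iteration taken while $\|g_k\| > \epsilon$ decreases the objective by at least $\kappa\,\epsilon^2$ for some constant $\kappa > 0$, and the objective is bounded below by $f_{\mathrm{low}}$, then the number $N$ of such iterations satisfies
$$ N \;\le\; \frac{f(x_0) - f_{\mathrm{low}}}{\kappa\,\epsilon^2} \;=\; O(\epsilon^{-2}). $$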

Updating the regularization parameter in the adaptive cubic regularization algorithm

The adaptive cubic regularization method [Cartis, Gould, Toint, 2009-2010] has recently been proposed for solving unconstrained minimization problems. At each iteration of this method, the objective function is replaced by a cubic approximation which comprises an adaptive regularization parameter whose role is related to the local Lipschitz constant of the objective’s Hessian. We present new … Read more
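
The baseline update being refined is, roughly, the following (a generic ARC-style rule with assumed constants; the paper's new updates exploit local Hessian-Lipschitz information more directly):

```python
def arc_sigma_update(f_x, f_trial, model_decrease, sigma,
                     eta1=0.1, eta2=0.9, gamma=2.0):
    """Basic regularization-parameter update used in ARC-type methods
    (generic textbook rule). Returns (new sigma, step accepted?).

    rho compares the actual objective decrease with the decrease
    predicted by the cubic model at the trial step.
    """
    rho = (f_x - f_trial) / max(model_decrease, 1e-16)
    if rho >= eta2:                  # very successful: relax regularization
        return max(sigma / gamma, 1e-16), True
    if rho >= eta1:                  # successful: keep sigma
        return sigma, True
    return gamma * sigma, False      # unsuccessful: regularize more heavily
```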

On the evaluation complexity of composite function minimization with applications to nonconvex nonlinear programming

We estimate the worst-case complexity of minimizing an unconstrained, nonconvex composite objective with a structured nonsmooth term by means of some first-order methods. We find that it is unaffected by the nonsmoothness of the objective in that a first-order trust-region or quadratic regularization method applied to it takes at most $O(\epsilon^{-2})$ function evaluations to reduce the … Read more
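
A minimal sketch of the smooth special case of such a first-order quadratic regularization method (hypothetical names; the composite case in the paper additionally carries the nonsmooth term inside the step model):

```python
import numpy as np

def quad_reg_first_order(f, grad, x, sigma=1.0, eps=1e-4,
                         eta=0.1, gamma=2.0, max_iter=10000):
    """First-order quadratic regularization on a smooth objective (sketch).
    Step: minimize f(x) + g's + (sigma/2)||s||^2  =>  s = -g / sigma.
    """
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:     # epsilon-approximate critical point
            break
        s = -g / sigma
        pred = g @ g / (2.0 * sigma)     # decrease predicted by the model
        rho = (f(x) - f(x + s)) / pred
        if rho >= eta:                   # successful: accept, relax sigma
            x = x + s
            sigma = max(sigma / gamma, 1e-8)
        else:                            # unsuccessful: increase sigma
            sigma *= gamma
    return x
```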

Complexity bounds for second-order optimality in unconstrained optimization

This paper examines worst-case evaluation bounds for finding weak minimizers in unconstrained optimization. For the cubic regularization algorithm, Nesterov and Polyak (2006) and Cartis, Gould and Toint (2010) show that at most $O(\epsilon^{-3})$ iterations may have to be performed to find an iterate which is within $\epsilon$ of satisfying second-order optimality conditions. We first show … Read more
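
For reference, the approximate second-order conditions involved can be checked as follows (one common scaling, assumed here; papers in this literature differ on the exact tolerance used for the curvature test):

```python
import numpy as np

def approx_second_order_point(g, H, eps):
    """True if the epsilon-approximate second-order conditions hold:
        ||g|| <= eps   and   lambda_min(H) >= -eps.
    """
    return np.linalg.norm(g) <= eps and np.linalg.eigvalsh(H).min() >= -eps
```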

Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization

The adaptive cubic regularization algorithms described in Cartis, Gould & Toint (2009, 2010) for unconstrained (nonconvex) optimization are shown to have improved worst-case efficiency in terms of the function- and gradient-evaluation count when applied to convex and strongly convex objectives. In particular, our complexity upper bounds match in order (as a function of the accuracy … Read more

On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization

The (optimal) worst-case function/gradient-evaluation complexity analysis available for the Adaptive Regularization algorithm with Cubics (ARC) for nonconvex smooth unconstrained optimization is extended to finite-difference versions of this algorithm, yielding complexity bounds for first-order and derivative-free methods applied to the same problem class. A comparison with the results obtained for derivative-free methods by Vicente … Read more
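
The substitution in question is, schematically, as follows (a plain forward-difference sketch; the paper analyzes how accurate such differences must be for the ARC complexity bounds to survive):

```python
import numpy as np

def fd_gradient(f, x, h=1e-8):
    """Forward-difference gradient estimate: replacing exact derivatives
    with estimates of this kind turns a first-order method into a
    derivative-free one (generic sketch)."""
    fx = f(x)
    g = np.zeros(len(x))
    for i in range(len(x)):
        e = np.zeros(len(x))
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g
```

Each such gradient estimate costs n + 1 function evaluations, which is roughly where the extra dimension-dependent factor in function-evaluation complexity bounds for derivative-free variants comes from.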