On the complexity of steepest descent, Newton’s and regularized Newton’s methods for nonconvex unconstrained optimization

It is shown that steepest descent and Newton's method for unconstrained nonconvex optimization, under standard assumptions, may both require a number of iterations and function evaluations arbitrarily close to O(epsilon^{-2}) to drive the norm of the gradient below epsilon. This shows that the upper bound of O(epsilon^{-2}) evaluations known for steepest descent …
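As a minimal sketch (not taken from the paper), the stopping criterion in question — terminating once the gradient norm falls below epsilon — can be illustrated with a fixed-step steepest-descent loop that counts the iterations used. The step size `alpha` and the quadratic test function are illustrative assumptions, not from the source.

```python
import numpy as np

def steepest_descent(grad, x0, eps=1e-4, alpha=0.1, max_iter=100000):
    """Fixed-step steepest descent.

    Stops when ||grad(x)|| <= eps and returns (x, iterations used).
    The complexity results discussed above bound how large that
    iteration count can be in terms of eps.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:
            return x, k
        x = x - alpha * g  # move along the negative gradient
    return x, max_iter

# Illustrative example: f(x) = 0.5 * ||x||^2, so grad f(x) = x.
x_final, iters = steepest_descent(lambda x: x, np.array([1.0, 1.0]), eps=1e-4)
print(iters)
```

On this strongly convex example convergence is fast; the paper's point is that on suitably constructed nonconvex instances the iteration count can approach the O(epsilon^{-2}) worst case.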