Regional Complexity Analysis of Algorithms for Nonconvex Smooth Optimization

A strategy is proposed for characterizing the worst-case performance of algorithms for solving nonconvex smooth optimization problems. Contemporary analyses characterize worst-case performance by providing, under certain assumptions on an objective function, an upper bound on the number of iterations (or function or derivative evaluations) required until a $p$th-order stationarity condition is approximately satisfied. This arguably …
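
For reference (standard usage, not a quotation from the abstract above), the approximate $p$th-order stationarity conditions referred to here are most commonly stated for $p=1$ and $p=2$: an iterate $x$ is accepted once

\[
\|\nabla f(x)\| \le \epsilon \quad (p = 1), \qquad \|\nabla f(x)\| \le \epsilon \ \text{ and } \ \lambda_{\min}\!\big(\nabla^2 f(x)\big) \ge -\epsilon \quad (p = 2),
\]

where $\epsilon \in (0,\infty)$ is a prescribed tolerance and $f$ is the objective function.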

An Inexact Regularized Newton Framework with a Worst-Case Iteration Complexity of $\mathcal{O}(\epsilon^{-3/2})$ for Nonconvex Optimization

An algorithm for solving smooth nonconvex optimization problems is proposed that, in the worst case, takes $\mathcal{O}(\epsilon^{-3/2})$ iterations to drive the norm of the gradient of the objective function below a prescribed positive real number $\epsilon$ and can take $\mathcal{O}(\epsilon^{-3})$ iterations to drive the leftmost eigenvalue of the Hessian of the objective above $-\epsilon$. The proposed …
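
As a hedged illustration only (the inexact framework of the paper is not reproduced here), regularized Newton methods that attain the $\mathcal{O}(\epsilon^{-3/2})$ rate typically compute a trial step at an iterate $x_k$ by approximately minimizing a cubically regularized model,

\[
s_k \approx \arg\min_{s \in \mathbb{R}^n} \ \nabla f(x_k)^T s + \tfrac{1}{2}\, s^T \nabla^2 f(x_k)\, s + \tfrac{\sigma_k}{3} \|s\|^3,
\]

with regularization parameter $\sigma_k > 0$; the two bounds quoted above correspond to the termination tests $\|\nabla f(x_k)\| \le \epsilon$ and $\lambda_{\min}(\nabla^2 f(x_k)) \ge -\epsilon$, respectively.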

Complexity Analysis of a Trust Funnel Algorithm for Equality Constrained Optimization

A method is proposed for solving equality constrained nonlinear optimization problems involving twice continuously differentiable functions. The method employs a trust funnel approach consisting of two phases: a first phase to locate an $\epsilon$-feasible point and a second phase to seek optimality while maintaining at least $\epsilon$-feasibility. A two-phase approach of this kind based on …
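
In generic notation (not necessarily that of the paper), the problem class and the feasibility measure can be written as

\[
\min_{x \in \mathbb{R}^n} \ f(x) \quad \text{subject to} \quad c(x) = 0, \qquad \text{with $x$ called $\epsilon$-feasible when } \|c(x)\| \le \epsilon,
\]

where $f$ and $c$ are twice continuously differentiable; the first phase drives the constraint violation $\|c(x)\|$ below $\epsilon$, and the second phase reduces $f$ while keeping the violation at or below $\epsilon$.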

A Trust Region Algorithm with a Worst-Case Iteration Complexity of ${\cal O}(\epsilon^{-3/2})$ for Nonconvex Optimization

We propose a trust region algorithm for solving nonconvex smooth optimization problems. For any $\bar\epsilon \in (0,\infty)$, the algorithm requires at most $\mathcal{O}(\epsilon^{-3/2})$ iterations, function evaluations, and derivative evaluations to drive the norm of the gradient of the objective function below any $\epsilon \in (0,\bar\epsilon]$. This improves upon the $\mathcal{O}(\epsilon^{-2})$ bound known to hold for …
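
For comparison (this is the classical template, not the specific variant proposed here), a standard trust region method computes each step by approximately solving

\[
\min_{s \in \mathbb{R}^n} \ f(x_k) + \nabla f(x_k)^T s + \tfrac{1}{2}\, s^T \nabla^2 f(x_k)\, s \quad \text{subject to} \quad \|s\| \le \delta_k,
\]

where $\delta_k > 0$ is the trust region radius; for such classical schemes the known bound on the number of iterations needed to reach $\|\nabla f(x_k)\| \le \epsilon$ is the $\mathcal{O}(\epsilon^{-2})$ figure mentioned above.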

A Simpler and Tighter Redundant Klee-Minty Construction

By introducing redundant Klee-Minty examples, we have previously shown that the central path can be bent along the edges of the Klee-Minty cubes, making $2^n-2$ sharp turns in dimension $n$. In those constructions the redundant hyperplanes were placed parallel to the facets active at the optimal solution. In this paper we present a simpler …
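
For concreteness, one common unit-cube formulation of a Klee-Minty example (the parameter choices and the redundant inequalities used in the paper may differ) is

\[
\min \ x_n \quad \text{subject to} \quad 0 \le x_1 \le 1, \qquad \epsilon\, x_{j-1} \le x_j \le 1 - \epsilon\, x_{j-1} \quad (j = 2,\dots,n),
\]

where $\epsilon \in (0,\tfrac{1}{2})$ is a fixed perturbation parameter; the feasible region is a deformed unit cube with $2^n$ vertices.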

How good are interior point methods? Klee-Minty cubes tighten iteration-complexity bounds.

By refining a variant of the Klee-Minty example that forces the central path to visit all the vertices of the Klee-Minty $n$-cube, we exhibit a nearly worst-case example for path-following interior point methods. Namely, while the theoretical iteration-complexity upper bound is $\mathcal{O}(2^{n}n^{\frac{5}{2}})$, we prove that solving this $n$-dimensional linear optimization problem requires at least $2^n-1$ …
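
As standard background (not a restatement of the construction in the paper), the central path followed by these methods for a linear program $\min\{c^T x : Ax = b,\ x \ge 0\}$ with $N$ nonnegative variables is the set of solutions, parametrized by $\mu > 0$, of

\[
Ax = b, \qquad A^T y + s = c, \qquad x_i s_i = \mu \ \ (i = 1,\dots,N), \qquad x > 0, \ s > 0,
\]

and path-following methods track this path as $\mu \to 0$ with a worst-case iteration bound that grows with $\sqrt{N}$; adding redundant constraints inflates $N$, which is why the redundant construction carries the large theoretical upper bound quoted above.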