Refining asymptotic complexity bounds for nonconvex optimization methods, including why steepest descent is \(o(\epsilon^{-2})\) rather than \(O(\epsilon^{-2})\)


We revisit the standard “telescoping sum” argument ubiquitous in the
final steps of analyzing the evaluation complexity of algorithms for
smooth nonconvex optimization, and obtain a refined formulation of the
resulting bound as a function of the requested accuracy \(\epsilon\).
While bounds obtained using the standard argument are typically of the
form \(O(\epsilon^{-\alpha})\) for some positive \(\alpha\), the
refined results are of the form \(o(\epsilon^{-\alpha})\). We then
explore to which known algorithms our refined bounds apply, and
finally describe an example showing how close the standard and refined
bounds can be.
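
As an illustration (not taken verbatim from the paper), here is a
minimal sketch of the classical telescoping argument for steepest
descent, assuming the usual textbook setting: a fixed step size
\(1/L\), an \(L\)-smooth objective \(f\), and a lower bound
\(f_{\mathrm{low}}\) on \(f\). The descent lemma gives a per-iteration
decrease

\[ f(x_k) - f(x_{k+1}) \;\ge\; \frac{1}{2L}\,\|\nabla f(x_k)\|^{2}, \]

and telescoping over \(k = 0,\dots,K-1\) yields

\[ f(x_0) - f_{\mathrm{low}} \;\ge\; \sum_{k=0}^{K-1}\bigl(f(x_k)-f(x_{k+1})\bigr) \;\ge\; \frac{K}{2L}\,\min_{0\le k<K}\|\nabla f(x_k)\|^{2}, \]

so \(\min_{0\le k<K}\|\nabla f(x_k)\| \le \epsilon\) is guaranteed once

\[ K \;\ge\; \frac{2L\,\bigl(f(x_0)-f_{\mathrm{low}}\bigr)}{\epsilon^{2}} \;=\; O(\epsilon^{-2}). \]

It is this classical \(O(\epsilon^{-2})\) worst-case count that the
refined analysis sharpens to \(o(\epsilon^{-2})\).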
