A Stochastic Objective-Function-Free Adaptive Regularization Method with Optimal Complexity

A fully stochastic second-order adaptive-regularization method for
unconstrained nonconvex optimization is presented which never computes
the objective-function value, yet achieves the optimal
$\mathcal{O}(\epsilon^{-3/2})$ complexity bound for finding
first-order critical points. The method is noise-tolerant, and the
inexactness conditions required for convergence depend on the history
of past steps. Applications to cases where derivative evaluation is
inexact and to minimization of finite sums by sampling are discussed.
Numerical experiments on large binary classification problems
illustrate the potential of the new method.
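
The abstract describes an adaptive-regularization (AR2) iteration that dispenses with the usual function-value ratio test, instead steering the regularization parameter from the history of past steps. The sketch below is a minimal, hypothetical illustration of that idea under stated assumptions, not the paper's algorithm: the names `offo_ar2_sketch` and `cubic_model_min`, the AdaGrad-like growth of $\sigma_k$ from accumulated step norms, and the gradient-descent model subsolver are all assumptions made for illustration.

```python
import numpy as np

def cubic_model_min(g, H, sigma, iters=100):
    """Approximately minimize the cubic model
    m(s) = g.s + 0.5 s.H.s + (sigma/3) ||s||^3
    by gradient descent on m (a deliberately simple subsolver)."""
    s = np.zeros_like(g)
    lr = 1.0 / (np.linalg.norm(H, 2) + sigma + 1.0)  # crude step size
    for _ in range(iters):
        grad_m = g + H @ s + sigma * np.linalg.norm(s) * s
        s = s - lr * grad_m
    return s

def offo_ar2_sketch(grad_est, hess_est, x0, sigma0=1.0, max_iter=100):
    """Hypothetical objective-function-free AR2 loop: f is never
    evaluated; regularization grows with the history of past steps."""
    x = x0.astype(float).copy()
    v = 0.0  # accumulated squared step norms ("history of past steps")
    for _ in range(max_iter):
        g = grad_est(x)                    # possibly inexact / sampled gradient
        H = hess_est(x)                    # possibly inexact / sampled Hessian
        sigma = sigma0 * np.sqrt(1.0 + v)  # AdaGrad-like update (assumption)
        s = cubic_model_min(g, H, sigma)
        x = x + s                          # every step accepted: no ratio test,
        v += float(s @ s)                  # since f is never computed
    return x

# Toy usage: noisy gradients/Hessians of f(x) = 0.5 ||x||^2.
rng = np.random.default_rng(0)
grad = lambda x: x + 0.01 * rng.standard_normal(x.shape)
hess = lambda x: np.eye(x.size)
print(offo_ar2_sketch(grad, hess, np.ones(5)))
```

Note the design point the abstract emphasizes: because no objective values are computed, the iteration cannot accept or reject steps by comparing $f$ before and after; in this sketch that role is played by the monotonically growing regularizer, which damps steps as the accumulated step history grows.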
