We propose a novel framework for analyzing convergence rates of stochastic optimization algorithms with adaptive step sizes. The framework is based on analyzing properties of an underlying generic stochastic process; in particular, we derive a bound on the expected stopping time of this process. We then use this framework to bound the expected global convergence rate of a stochastic variant of a traditional trust region method, introduced in (Chen, Menickelly, and Scheinberg, 2014). While traditional trust region methods rely on exact computation of the gradient, the Hessian, and the values of the objective function, this method assumes that these quantities are available only up to some dynamically adjusted accuracy. Moreover, this accuracy is assumed to hold only with some sufficiently large, but fixed, probability, without any additional restrictions on the variance of the errors. This setting applies, for example, to standard stochastic optimization and machine learning formulations. Improving upon the analysis in (Chen et al., 2014), we show that the stochastic process defined by the algorithm satisfies the assumptions of our proposed general framework, with the stopping time defined as the first iteration at which $\|\nabla f(x)\|\leq \epsilon$. The resulting bound on the expected stopping time is $O(\epsilon^{-2})$, under the assumption of sufficiently accurate stochastic gradients, and is the first global complexity bound for a stochastic trust-region method. Finally, we apply the same analytic framework to derive second-order complexity bounds for a second-order algorithmic variant, under an additional assumption on the accuracy of the estimates.
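To make the setting concrete, the following is a minimal Python sketch of one iteration loop of a STORM-style stochastic trust-region method of the kind analyzed here. It is an illustration under stated assumptions, not the paper's exact algorithm: the user-supplied callables `model_grad`, `model_hess`, and `f_est` stand in for the random models and noisy function-value estimates, and all constants (`eta`, `gamma`) and the Cauchy-step subproblem solver are hypothetical choices.

```python
import numpy as np

def stochastic_trust_region(x0, f_est, model_grad, model_hess,
                            delta0=1.0, eta=0.1, gamma=2.0,
                            eps=1e-3, max_iter=1000):
    """Illustrative STORM-style loop (names and constants are hypothetical):
    build random models, take a Cauchy step inside the trust region, and
    accept/reject the step using noisy function-value estimates."""
    x, delta = np.asarray(x0, dtype=float), delta0
    for k in range(max_iter):
        g = model_grad(x, delta)           # stochastic gradient estimate
        if np.linalg.norm(g) <= eps:       # stopping time: small model gradient
            return x, k
        H = model_hess(x, delta)           # stochastic Hessian estimate
        # Cauchy step: minimize the quadratic model along -g within radius delta
        gHg = g @ H @ g
        t = delta / np.linalg.norm(g)
        if gHg > 0:
            t = min(t, (g @ g) / gHg)
        s = -t * g
        pred = -(g @ s + 0.5 * s @ H @ s)              # predicted decrease
        ared = f_est(x, delta) - f_est(x + s, delta)   # estimated actual decrease
        if pred > 0 and ared / pred >= eta:
            x, delta = x + s, gamma * delta            # successful step: expand
        else:
            delta = delta / gamma                      # unsuccessful: shrink
    return x, max_iter
```

In the setting of the paper, the estimates returned by `model_grad`, `model_hess`, and `f_est` need only be sufficiently accurate (relative to the current radius `delta`) with a fixed probability at each iteration; the returned iteration count `k` plays the role of the stopping time whose expectation is bounded by the supermartingale framework.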
Citation
Convergence Rate Analysis of a Stochastic Trust Region Method via Supermartingales. Technical Report, Oxford University, 2018.