In this paper, we propose a new stochastic gradient method for the numerical minimization of finite sums. We also propose a modified version of the method applicable to more general problems, referred to as infinite sum problems, in which the objective function is given as a mathematical expectation. The method is based on a strategy that exploits the effectiveness of the well-known Barzilai-Borwein (BB) rules, or variants thereof (BB-like rules), for updating the step length in the standard gradient method. The proposed method adapts this strategy to the stochastic framework by reusing the same Sample Average Approximation (SAA) estimator of the objective function over several iterations. Furthermore, the sample size is controlled by an additional sampling procedure, which also plays a role in deciding whether the proposed iterate is accepted. Moreover, the number of ``inner'' iterations performed with the same sample is controlled by an adaptive rule that prevents the method from getting stuck with the same estimator for too long.
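For reference, a minimal sketch of the classical Barzilai-Borwein step lengths on which the method builds, stated for a generic smooth objective $f$ with the usual notation $s_{k-1} = x_k - x_{k-1}$ and $y_{k-1} = \nabla f(x_k) - \nabla f(x_{k-1})$ (this notation is assumed here; the precise stochastic, SAA-based adaptation is the one defined in the paper):
\[
  \alpha_k^{\mathrm{BB1}} = \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}},
  \qquad
  \alpha_k^{\mathrm{BB2}} = \frac{s_{k-1}^{\top} y_{k-1}}{y_{k-1}^{\top} y_{k-1}},
\]
where $\alpha_k$ is used as the step length in the gradient iteration $x_{k+1} = x_k - \alpha_k \nabla f(x_k)$.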
Convergence results are discussed for both the finite and the infinite sum versions, for general and for strongly convex objective functions. For the strongly convex case, we also provide a convergence rate and a worst-case complexity analysis. Numerical experiments on well-known binary classification datasets show very promising performance of the method, without the need to provide specially tuned values for the hyperparameters on which it depends.
Spectral Stochastic Gradient Method with Additional Sampling for Finite and Infinite Sums