Subsampled cubic regularization method with distinct sample sizes for function, gradient, and Hessian

We develop and study a subsampled cubic regularization method for finite-sum composite optimization problems, in which the function, gradient, and Hessian are estimated using possibly different sample sizes.
By allowing each quantity to have its own sampling strategy, the proposed method offers greater flexibility to control the accuracy of the model components and to better balance computational effort and estimation quality. Such flexibility is particularly valuable in large-scale settings where the relative cost of evaluating these quantities can vary significantly. We establish iteration-complexity bounds for computing approximate first-order critical points and prove global convergence properties. In addition, we present numerical experiments that illustrate the practical performance of the proposed method.
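To make the idea concrete, the following Python sketch runs a generic subsampled cubic-regularization loop in which the function, gradient, and Hessian estimates each use their own independently drawn sample. This is an illustrative reconstruction, not the paper's exact algorithm: the subproblem solver, the acceptance test, the sample sizes, and all names (`scr_minimize`, `cubic_subproblem`, `n_f`, `n_g`, `n_h`, `sigma`, `eta`) are our own assumptions.

```python
import numpy as np

def cubic_subproblem(g, H, sigma, iters=50):
    """Approximately minimize the cubic model
        m(s) = g.s + 0.5 s.H.s + (sigma/3)||s||^3
    via fixed-point iteration on (H + sigma||s|| I) s = -g
    (a crude solver chosen for brevity; assumes H is well-conditioned)."""
    n = len(g)
    s = np.zeros(n)
    for _ in range(iters):
        s = -np.linalg.solve(H + sigma * np.linalg.norm(s) * np.eye(n), g)
    return s

def scr_minimize(fs, grads, hessians, x0, n_f, n_g, n_h,
                 sigma=1.0, eta=0.1, max_iter=30, seed=0):
    """Subsampled cubic regularization with distinct sample sizes
    n_f, n_g, n_h for the function, gradient, and Hessian estimates.
    fs, grads, hessians are lists of per-component callables f_i, g_i, H_i."""
    rng = np.random.default_rng(seed)
    N, x = len(fs), x0.copy()
    for _ in range(max_iter):
        # Each quantity gets its own independently drawn subsample.
        S_f = rng.choice(N, size=n_f, replace=False)
        S_g = rng.choice(N, size=n_g, replace=False)
        S_h = rng.choice(N, size=n_h, replace=False)

        g = np.mean([grads[i](x) for i in S_g], axis=0)
        H = np.mean([hessians[i](x) for i in S_h], axis=0)
        s = cubic_subproblem(g, H, sigma)

        # Predicted decrease of the cubic model vs. estimated actual decrease.
        pred = -(g @ s + 0.5 * s @ H @ s
                 + (sigma / 3.0) * np.linalg.norm(s) ** 3)
        f_old = np.mean([fs[i](x) for i in S_f])
        f_new = np.mean([fs[i](x + s) for i in S_f])
        rho = (f_old - f_new) / max(pred, 1e-12)

        if rho >= eta:                       # successful step: accept, relax sigma
            x, sigma = x + s, max(sigma / 2.0, 1e-8)
        else:                                # unsuccessful: reject, tighten sigma
            sigma *= 2.0
    return x
```

Keeping the three sample sizes separate is the point of the sketch: for instance, a cheap per-component Hessian might justify a larger `n_h`, while an expensive function evaluation argues for a smaller `n_f` in the acceptance test.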
