Subsampled Inexact Newton methods for minimizing large sums of convex functions

This paper deals with the minimization of large sums of convex functions by Inexact Newton (IN) methods employing subsampled Hessian approximations. The Conjugate Gradient method is used to compute the inexact Newton step, and global convergence is enforced by a nonmonotone line search procedure. The aim is to obtain methods with affordable cost and fast convergence. Assuming strictly convex functions, a set of rules for the forcing terms and subsample sizes is derived that ensures local linear/superlinear convergence of the proposed methods. The random choice of the Hessian subsample is also investigated, and bounds ensuring a good approximation of the true Hessian with high probability are provided. For such Hessian approximations and suitable forcing terms, convergence in the mean square is proved, both for finite and infinite sums of functions. Finally, convergence of IN methods is investigated in the case of a sum of convex functions whose overall objective is strongly convex. Numerical results on well-known binary classification problems are also given. Adaptive strategies for selecting the forcing terms and the Hessian subsample size, stemming from the theoretical analysis, are employed, and the numerical results show that they yield effective IN methods.
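
To illustrate the scheme the abstract describes, below is a minimal Python sketch of one subsampled IN iteration: a Hessian approximation is averaged over a random subsample, the inexact Newton step is computed by Conjugate Gradient truncated by a forcing term, and the resulting direction would then be handed to a (nonmonotone) line search. All names here (`grad_full`, `hess_i`, `eta_k`, `sample_size`) are illustrative assumptions, not the paper's notation or API.

```python
import numpy as np

def subsampled_in_step(w, grad_full, hess_i, n_samples, sample_size, eta_k, rng=None):
    """One subsampled Inexact Newton step (hypothetical names throughout):
    build a subsampled Hessian, then run CG truncated by the forcing term eta_k."""
    rng = np.random.default_rng() if rng is None else rng
    g = grad_full(w)                                  # full gradient of the sum
    S = rng.choice(n_samples, size=sample_size, replace=False)
    H = sum(hess_i(w, i) for i in S) / sample_size    # averaged subsampled Hessian

    # Conjugate Gradient on H d = -g, stopped once the residual meets the
    # inexact Newton condition ||H d + g|| <= eta_k * ||g||.
    d = np.zeros_like(g)
    r = -g.copy()                                     # residual of H d = -g at d = 0
    p = r.copy()
    tol = eta_k * np.linalg.norm(g)
    for _ in range(g.size):                           # exact CG needs at most n steps
        if np.linalg.norm(r) <= tol:
            break
        Hp = H @ p
        alpha = (r @ r) / (p @ Hp)
        d = d + alpha * p
        r_new = r - alpha * Hp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d                                          # direction passed to the line search
```

In the paper's setting, the adaptive strategies would tune `eta_k` and `sample_size` across iterations; they are treated as fixed inputs here only to keep the sketch self-contained.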
