In this paper, we develop and analyze sub-sampled trust-region methods for solving finite-sum optimization problems. These methods employ sub-sampling strategies to approximate the gradient and Hessian of the objective function, significantly reducing the overall computational cost. We propose a novel adaptive procedure for deterministically adjusting the sample size used for gradient (or gradient and Hessian) approximations. Furthermore, we establish worst-case iteration complexity bounds for obtaining approximate stationary points. More specifically, for given $\varepsilon_g, \varepsilon_H \in (0,1)$, we show that an $\varepsilon_g$-approximate first-order stationary point is reached in at most $\mathcal{O}(\varepsilon_g^{-2})$ iterations, whereas an $(\varepsilon_g,\varepsilon_H)$-approximate second-order stationary point is reached in at most $\mathcal{O}(\max\{\varepsilon_g^{-2}\varepsilon_H^{-1},\varepsilon_H^{-3}\})$ iterations. Finally, numerical experiments illustrate the effectiveness of our new sub-sampling technique.
Sub-sampled Trust-Region Methods with Deterministic Worst-Case Complexity Guarantees
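As a rough illustration of the setting, the following Python sketch shows the overall shape of a sub-sampled trust-region iteration for a finite sum $f(x) = \tfrac{1}{n}\sum_{i=1}^{n} f_i(x)$. The interface (`f_i(i, x)`, `grad_i(i, x)`), the Cauchy-type step, and the sample-doubling rule are assumptions made here for concreteness; they are a minimal stand-in, not the adaptive procedure or the complexity analysis developed in the paper.

```python
import numpy as np

def subsampled_trust_region(f_i, grad_i, x0, n, eps_g=1e-3,
                            delta0=1.0, s0=32, eta=0.1, max_iter=500, seed=0):
    """Illustrative sub-sampled trust-region loop (hypothetical interface:
    f_i(i, x) and grad_i(i, x) return the i-th component value and gradient).
    The sample-size rule below is a generic placeholder."""
    rng = np.random.default_rng(seed)
    x, delta, s = np.asarray(x0, dtype=float), delta0, s0
    for _ in range(max_iter):
        idx = rng.choice(n, size=min(s, n), replace=False)
        g = np.mean([grad_i(i, x) for i in idx], axis=0)   # sub-sampled gradient
        if np.linalg.norm(g) <= eps_g and s >= n:
            return x                                        # eps_g-approximate stationary point
        p = -delta * g / (np.linalg.norm(g) + 1e-16)        # Cauchy-type step on the trust region
        f_old = np.mean([f_i(i, x) for i in idx])
        f_new = np.mean([f_i(i, x + p) for i in idx])
        pred = -(g @ p)                                      # predicted decrease of the linear model
        rho = (f_old - f_new) / max(pred, 1e-16)             # actual vs. predicted reduction
        if rho >= eta:                                       # successful step: accept and expand radius
            x, delta = x + p, 2.0 * delta
        else:                                                # unsuccessful: shrink radius
            delta *= 0.5
            s = min(2 * s, n)                                # placeholder rule: enlarge the subsample
    return x
```

The sketch only subsamples gradients and function values; adding a sub-sampled Hessian to build a quadratic model, and replacing the placeholder doubling rule with the deterministic sample-size adjustment analyzed in the paper, follows the same loop structure.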