This work introduces an inexact cubic regularization method with adaptive reuse of Hessian approximations to solve general non-convex optimization problems.
In the proposed approach, the gradient is computed inexactly and updated at every iteration, whereas the Hessian approximation is updated only at selected iterations and then reused over the next $m$ iterations (a lazy strategy), with the value of $m$ allowed to vary throughout the procedure.
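As a rough sketch (with notation assumed here rather than taken from the paper), each iteration minimizes a cubic-regularized model of the form
\[
m_k(s) \;=\; f(x_k) + g_k^{\top} s + \tfrac{1}{2}\, s^{\top} B_{j(k)}\, s + \tfrac{\sigma_k}{3}\, \|s\|^{3},
\]
where $g_k$ is the inexact gradient at $x_k$, $B_{j(k)}$ is the Hessian approximation computed at some earlier iteration $j(k) \le k$ and reused until it is refreshed, and $\sigma_k > 0$ is the regularization parameter.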
The method can be implemented in either a Hessian-free or a derivative-free manner, and we discuss implementations that approximate the required derivative information via finite-difference schemes.
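In the derivative-free setting, for instance, the inexact gradient can be built by a standard forward-difference scheme (sketched here with a hypothetical step size $h > 0$):
\[
[g_k]_i \;=\; \frac{f(x_k + h\, e_i) - f(x_k)}{h}, \qquad i = 1, \dots, n,
\]
which costs $n + 1$ function evaluations per gradient, with the approximation error controlled through the choice of $h$.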
We provide iteration-complexity guarantees showing that the method reaches an approximate critical point, together with bounds on the total number of gradient and function evaluations required, including the derivative-free case in which only function values are used.
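Here an approximate critical point is understood, for concreteness, in the usual first-order sense: given a tolerance $\epsilon > 0$, the method returns an iterate $\bar{x}$ satisfying
\[
\|\nabla f(\bar{x})\| \le \epsilon,
\]
a standard target in non-convex complexity analysis.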
Numerical experiments are reported to illustrate the behavior of the proposed method and to compare its performance with that of existing lazy cubic algorithms.