Quadratic Regularization Methods with Finite-Difference Gradient Approximations

This paper presents two quadratic regularization methods with finite-difference gradient approximations for smooth unconstrained optimization problems. One method is based on forward finite-difference gradients, while the other is based on central finite-difference gradients. In both methods, the accuracy of the gradient approximations and the regularization parameter in the quadratic models are jointly adjusted using a nonmonotone acceptance condition for the trial points. When the objective function is bounded from below and has a Lipschitz continuous gradient, it is shown that the method based on forward finite-difference gradients needs at most $\mathcal{O}\left(n\epsilon^{-2}\right)$ function evaluations to generate an $\epsilon$-approximate stationary point, where $n$ is the problem dimension. Under the additional assumption that the Hessian of the objective is Lipschitz continuous, an evaluation complexity bound of the same order is proved for the method based on central finite-difference gradients. Numerical results are also presented; they confirm the theoretical findings and illustrate the relative efficiency of the proposed methods.
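The abstract names three ingredients: a finite-difference gradient whose step size controls accuracy, a regularized quadratic model whose minimizer gives the trial step, and a nonmonotone acceptance test that jointly adjusts both. The following Python sketch illustrates how these pieces fit together under stated assumptions; the function names, parameter values, and the specific acceptance and update rules are illustrative choices, not the paper's algorithm.

```python
import numpy as np

def forward_diff_grad(f, x, h):
    """Forward finite-difference gradient: n extra function evaluations
    beyond f(x), with O(h) truncation error per component."""
    fx = f(x)
    g = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g, fx

def quad_reg_fd(f, x0, sigma=1.0, h=1e-8, eps=1e-5, max_iter=1000,
                eta=1e-4, memory=10):
    """Minimal sketch of a quadratic regularization method with
    forward-difference gradients and a nonmonotone acceptance test.
    All parameter names and update factors here are assumptions made
    for illustration, not the paper's method."""
    x = x0.copy()
    hist = []  # recent model f-values for the nonmonotone test
    for _ in range(max_iter):
        g, fx = forward_diff_grad(f, x, h)
        if np.linalg.norm(g) <= eps:
            break  # approximate stationarity reached
        hist = (hist + [fx])[-memory:]
        # Trial step minimizes the regularized quadratic model
        # m(s) = fx + g^T s + (sigma/2) ||s||^2, i.e. s = -g / sigma.
        s = -g / sigma
        f_trial = f(x + s)
        # Nonmonotone acceptance: compare against the worst of the
        # last few reference values rather than fx alone.
        if f_trial <= max(hist) - eta * sigma * np.dot(s, s):
            x = x + s
            sigma = max(sigma / 2.0, 1e-8)  # relax regularization
        else:
            sigma *= 2.0                    # tighten regularization
            h = max(h / 2.0, 1e-12)         # and improve FD accuracy
    return x

# Example usage on a simple smooth test function.
f = lambda x: np.sum((x - 1.0) ** 2)
x_star = quad_reg_fd(f, np.zeros(5))
```

A central-difference variant would replace the gradient routine with $g_i \approx \bigl(f(x + h e_i) - f(x - h e_i)\bigr)/(2h)$, doubling the per-iteration evaluation cost from $n$ to $2n$ in exchange for an $\mathcal{O}(h^2)$ truncation error.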
