Examples of slow convergence for adaptive regularization optimization methods are not isolated

The adaptive regularization algorithm for unconstrained nonconvex
optimization was shown in Nesterov and Polyak (2006) and Cartis, Gould and Toint (2011) to  require, under standard assumptions, at most O(\epsilon^{3/(3-q)}) evaluations of the objective function and its derivatives of degrees one and two to produce an \epsilon-approximate critical point of order q in {1,2}. This bound was shown to be sharp for q in {1,2} by various authors. This note revisits these results and shows that the example for which slow convergence is exhibited is not isolated, but that this behaviour occurs for a subset of univariate functions of nonzero measure.
