Fast Stochastic Second-Order Adagrad for Nonconvex Bound-Constrained Optimization

ADAGB2, a generalization of the Adagrad algorithm for stochastic optimization, is introduced; it is also applicable to bound-constrained problems and capable of using second-order information when available. It is shown that, given delta in (0,1) and epsilon in (0,1], the ADAGB2 algorithm needs at most O(epsilon^{-2}) iterations to reach an epsilon-approximate first-order critical point of the bound-constrained problem with probability at least 1-delta, provided the average root mean square error of the gradient oracle is sufficiently small. Should this condition fail, it is also shown that the optimality level of the iterates is bounded above by this average. The relation between the approximate and true classical projected-gradient-based optimality measures for bound-constrained problems is also investigated, and it is shown that merely assuming unbiased gradient oracles may be insufficient to ensure convergence in O(epsilon^{-2}) iterations.
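As context for the projected-gradient-based optimality measure and the Adagrad-style iteration mentioned in the abstract, below is a minimal Python sketch of a diagonal Adagrad step projected onto a box, together with the classical measure ||P_[l,u](x - grad f(x)) - x||. This is not the paper's ADAGB2 method (which also exploits second-order information); the function names, the learning-rate parameter, and the noisy-oracle demo are illustrative assumptions.

```python
import numpy as np

def project_box(x, lower, upper):
    """Componentwise projection of x onto the box [lower, upper]."""
    return np.minimum(np.maximum(x, lower), upper)

def projected_gradient_measure(x, grad, lower, upper):
    """Classical first-order optimality measure for bound constraints:
    ||P_[l,u](x - grad) - x||; it vanishes exactly at first-order
    critical points of the bound-constrained problem."""
    return np.linalg.norm(project_box(x - grad, lower, upper) - x)

def adagrad_box_step(x, grad, accum, lower, upper, lr=1.0, eps=1e-8):
    """One diagonal Adagrad step followed by projection onto the box.
    `accum` accumulates the squared (stochastic) gradients."""
    accum = accum + grad ** 2
    step = lr * grad / (np.sqrt(accum) + eps)
    return project_box(x - step, lower, upper), accum

# Illustrative use: minimize f(x) = 0.5*||x - c||^2 over [0, 1]^n
# with a noisy (unbiased) gradient oracle.
rng = np.random.default_rng(0)
n = 5
c = np.linspace(-0.5, 1.5, n)
lower, upper = np.zeros(n), np.ones(n)
x, accum = np.full(n, 0.5), np.zeros(n)
for k in range(200):
    noisy_grad = (x - c) + 0.01 * rng.standard_normal(n)
    x, accum = adagrad_box_step(x, noisy_grad, accum, lower, upper)
print("projected-gradient measure:",
      projected_gradient_measure(x, x - c, lower, upper))
```

The demo uses the true gradient (x - c) only when evaluating the optimality measure at the end; the iteration itself sees only the noisy oracle, mirroring the stochastic setting analyzed in the paper.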
