In the development of first-order methods for smooth (resp., composite) convex optimization problems, the gradient (resp., gradient mapping) norm is a fundamental optimality measure, and a regularization technique is known to make first-order methods nearly optimal with respect to it. In this paper, we propose an adaptive regularization approach that attains this nearly optimal iteration complexity without prior knowledge of the distance from the initial point to the optimal solution set, which existing regularization approaches require. Second, to achieve even faster convergence adaptively, we apply this approach to construct a first-order method that adapts to the Hölderian error bound condition (or, equivalently, the Łojasiewicz gradient property), which covers a moderately wide class of applications. The proposed method attains a nearly optimal iteration complexity with respect to the gradient mapping norm.
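To make the idea concrete, the following is a minimal illustrative sketch (not the paper's actual algorithm, which uses accelerated methods and a more careful restart test) of a regularization scheme of this flavor: minimize the regularized objective f(x) + (sigma/2)||x - x0||^2 with gradient descent, and halve the regularization parameter sigma whenever the resulting point is not accurate enough for the original objective. All function and parameter names here are hypothetical.

```python
import numpy as np

def adaptive_regularization(f_grad, x0, L, eps, sigma0=1.0, max_outer=50):
    """Illustrative sketch: find x with ||f_grad(x)|| <= eps for an
    L-smooth convex f by repeatedly minimizing the regularized objective
    f(x) + (sigma/2)||x - x0||^2, halving sigma on failure.  This avoids
    needing the distance ||x0 - x*|| in advance."""
    sigma = sigma0
    x = x0.copy()
    for _ in range(max_outer):
        # Gradient of the regularized objective; it is (L + sigma)-smooth
        # and sigma-strongly convex, so plain gradient descent converges.
        g = lambda x: f_grad(x) + sigma * (x - x0)
        x = x0.copy()
        while np.linalg.norm(g(x)) > eps / 2:
            x = x - g(x) / (L + sigma)
        if np.linalg.norm(f_grad(x)) <= eps:
            return x          # accurate enough for the ORIGINAL objective
        sigma /= 2            # regularizer too strong: halve and retry
    return x
```

The adaptivity lies in the halving loop: the non-adaptive variant must set sigma from the (unknown) initial distance, whereas here an over-large guess is simply corrected by the failure check.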
Nearly optimal first-order methods for convex optimization under gradient norm measure: An adaptive regularization approach