New analysis of linear convergence of gradient-type methods via unifying error bound conditions

Linear convergence of gradient-type methods for non-strongly convex optimization has been studied extensively, with several notions introduced as sufficient conditions. Influential examples include the error bound property, the restricted strong convexity property, the quadratic growth property, and the Kurdyka-Łojasiewicz property. In this paper, we first define a group of error bound conditions …
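
For orientation, standard forms of these conditions (not necessarily the exact definitions used in the paper) for a smooth convex function \(f\) with \(L\)-Lipschitz gradient, optimal value \(f^*\) and solution set \(X^*\) are
\[ \mathrm{dist}(x, X^*) \le \kappa\,\|\nabla f(x)\| \quad \text{(error bound)}, \]
\[ f(x) - f^* \ge \tfrac{\mu}{2}\,\mathrm{dist}(x, X^*)^2 \quad \text{(quadratic growth)}, \]
\[ f(x) - f^* \le \tfrac{1}{2\mu}\,\|\nabla f(x)\|^2 \quad \text{(Kurdyka-Łojasiewicz with exponent } 1/2\text{, i.e. Polyak-Łojasiewicz)}. \]
Under the last inequality, gradient descent with step size \(1/L\) satisfies \(f(x_{k+1}) - f^* \le (1 - \mu/L)\,(f(x_k) - f^*)\), i.e. linear convergence without strong convexity; for convex functions with Lipschitz gradient these conditions are known to coincide up to constants.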

Strong local convergence properties of adaptive regularized methods for nonlinear least-squares

This paper studies adaptive regularized methods for nonlinear least-squares problems where the model of the objective function used at each iteration is either the Euclidean residual regularized by a quadratic term or the Gauss-Newton model regularized by a cubic term. For suitable choices of the regularization parameter, the role of the regularization term is to …
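
For orientation, the two models can be written in a standard form (which may differ in detail from the paper's): with residual \(F: \mathbb{R}^n \to \mathbb{R}^m\) and Jacobian \(J(x_k)\), the quadratically regularized Euclidean residual model and the cubically regularized Gauss-Newton model are
\[ m_k(s) = \|F(x_k) + J(x_k)s\| + \tfrac{\sigma_k}{2}\,\|s\|^2, \qquad m_k(s) = \tfrac{1}{2}\,\|F(x_k) + J(x_k)s\|^2 + \tfrac{\sigma_k}{3}\,\|s\|^3, \]
respectively. In adaptive regularization frameworks of this kind, each iteration computes a (possibly approximate) minimizer \(s_k\) of the chosen model, accepts the step when it yields sufficient decrease of the residual, and otherwise increases the regularization parameter \(\sigma_k\).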