We first consider convex composite optimization models (COM) in which the gradient of the differentiable term is only locally Lipschitz continuous. The classical proximal gradient method (PG) is studied with a new stepsize selection strategy, where each stepsize is conveniently computed via an enhanced adaptive closed-form formula. The resulting stepsize sequence is proved to increase to a positive limit after a finite number of iterations. To establish convergence of the corresponding proximal gradient algorithm, we prove a sufficient-decrease-type inequality associated with this explicit stepsize choice. This allows us to show that the objective value decreases from some fixed iteration onward and to derive the standard sublinear convergence rate of our PG algorithm. In particular, when the smooth term is locally strongly convex, our PG algorithm converges Q-linearly. We further show that the method extends to a broad class of nonconvex COM, provided the differentiable term has a globally Lipschitz gradient. Finally, numerical results on numerous applicable test instances demonstrate the efficiency of the proposed method in comparison with other recent methods.
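For context, the underlying model and iteration are the standard ones: a convex composite problem with a differentiable term $f$ and a convex, possibly nonsmooth, term $g$, solved by the classical proximal gradient update. The sketch below uses standard notation only; the enhanced adaptive stepsize formula proposed in the paper is not reproduced here, and the curvature-type estimate shown is merely an illustrative placeholder, not the authors' rule.
\[
  \min_{x \in \mathbb{R}^n} \; F(x) := f(x) + g(x),
  \qquad
  x^{k+1} = \operatorname{prox}_{\lambda_k g}\!\bigl(x^k - \lambda_k \nabla f(x^k)\bigr),
\]
where $\operatorname{prox}_{\lambda g}(y) := \operatorname*{arg\,min}_{x}\bigl\{\, g(x) + \tfrac{1}{2\lambda}\|x-y\|^2 \,\bigr\}$ and $\lambda_k > 0$ is the stepsize. A common curvature-based adaptive choice, shown only for illustration, is $\lambda_k \approx \|x^k - x^{k-1}\| \,/\, \|\nabla f(x^k) - \nabla f(x^{k-1})\|$, which estimates the local Lipschitz constant of $\nabla f$ along the iterates.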