Composite optimization models via proximal gradient method with a novel enhanced adaptive stepsize

We first consider convex composite optimization models in which the gradient of the differentiable term is only locally Lipschitz continuous. The classical proximal gradient method is studied with our novel enhanced adaptive stepsize selection. To establish convergence of the proposed algorithm, we derive a sufficient-decrease-type inequality associated with the new stepsize rule. This allows us to show that the objective value decreases from some fixed iteration onward and to obtain a sublinear convergence rate for the new method. In particular, when the smooth term is locally strongly convex, our algorithm converges Q-linearly. We further show that the method can be applied to nonconvex composite optimization problems provided that the differentiable function has a globally Lipschitz gradient. Finally, the efficiency of the proposed algorithms is demonstrated by numerical results on a range of test instances, in comparison with other state-of-the-art algorithms.
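
To fix ideas, the sketch below shows the classical proximal gradient iteration for a composite problem min f(x) + g(x) with smooth f and proximable g, together with an adaptive stepsize update. The specific update used here (a Barzilai-Borwein-style local curvature estimate) is only a placeholder assumption for illustration; the paper's enhanced adaptive stepsize rule is not specified in the abstract. The helper names `proximal_gradient` and `soft_threshold` are hypothetical.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(grad_f, prox_g, x0, t0=1.0, max_iter=1000, tol=1e-8):
    """Classical proximal gradient step x_{k+1} = prox_{t_k g}(x_k - t_k grad f(x_k)).

    The stepsize update below is a Barzilai-Borwein-style curvature estimate,
    used only as a stand-in for the paper's enhanced adaptive rule.
    """
    x = np.asarray(x0, dtype=float)
    t = t0
    g = grad_f(x)
    for _ in range(max_iter):
        x_new = prox_g(x - t * g, t)
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        if np.linalg.norm(s) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        sy = float(s @ y)
        if sy > 0.0:
            # Placeholder adaptive stepsize: inverse of a local curvature estimate.
            t = float(s @ s) / sy
        x, g = x_new, g_new
    return x

# Usage example: LASSO, min 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
b = rng.standard_normal(50)
lam = 0.1
x_hat = proximal_gradient(
    grad_f=lambda x: A.T @ (A @ x - b),
    prox_g=lambda v, t: soft_threshold(v, lam * t),
    x0=np.zeros(100),
)
```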
