Composite optimization models via proximal gradient method with a novel enhanced adaptive stepsize

We first consider convex composite optimization models in which the gradient of the differentiable term is only locally Lipschitz continuous. The classical proximal gradient method is studied with our novel enhanced adaptive stepsize selection. To obtain convergence of the proposed algorithm, we establish a sufficient-decrease-type inequality associated with our new …
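
For context, a minimal sketch of the setting, assuming the standard composite model with a smooth convex term $f$ whose gradient is locally Lipschitz and a convex, possibly nonsmooth term $g$; the enhanced adaptive rule for choosing the stepsize $\lambda_k$ is the paper's contribution and is not reproduced here:

\[
\min_{x \in \mathbb{R}^n} \; F(x) := f(x) + g(x),
\qquad
x^{k+1} = \operatorname{prox}_{\lambda_k g}\!\big(x^k - \lambda_k \nabla f(x^k)\big),
\]
where the proximal operator is defined by
\[
\operatorname{prox}_{\lambda g}(y) := \arg\min_{u}\Big\{ g(u) + \tfrac{1}{2\lambda}\,\|u - y\|^2 \Big\}.
\]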