Composite optimization models via proximal gradient method with increasing adaptive stepsizes

We first consider convex composite optimization models in which only a local Lipschitz condition is imposed on the gradient of the differentiable term. The classical proximal gradient (PG) method is studied with our new stepsize selection strategy. The proposed stepsizes can be computed conveniently in explicit form, and their sequence is proved to be increasing to a finite positive limit. The PG method with our stepsize selection is shown to produce a decreasing objective sequence and to converge with the complexity bound $O\left(\frac{1}{k}\right)$ for $F(x^k)-F_*.$ This rate is strengthened to {\it Q-linear} convergence if $f$ is additionally assumed to be locally strongly convex. To the best of our knowledge, no proximal algorithm with an {\it adaptive} stepsize selection has so far achieved such convergence properties for convex composite optimization models without a globally Lipschitz gradient of the smooth term. In addition, we show that our algorithm can be extended to a class of nonconvex composite models when the global Lipschitz condition on $\nabla f$ is additionally imposed. The efficiency of the proposed algorithms is demonstrated by numerical results on a wide range of test problems.
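For context, a minimal sketch of the standard composite setting and generic PG step referred to above is given below; the notation $F=f+g$, the stepsize $\lambda_k$, and the proximal operator are the usual ones, while the explicit adaptive formula for $\lambda_k$ is the paper's contribution and is not reproduced in this abstract:
$$
\min_{x\in\mathbb{R}^n} F(x):=f(x)+g(x),\qquad
x^{k+1}=\operatorname{prox}_{\lambda_k g}\!\big(x^k-\lambda_k\nabla f(x^k)\big),
$$
where $f$ is differentiable, $g$ is convex and possibly nonsmooth, and $\operatorname{prox}_{\lambda g}(y):=\arg\min_{u}\big\{g(u)+\tfrac{1}{2\lambda}\|u-y\|^2\big\}.$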
