Composite optimization models via proximal gradient method with increasing adaptive stepsizes

We first consider convex composite optimization models in which no global Lipschitz condition is imposed on the gradient of the differentiable term. The classical proximal gradient method is studied with a new strategy for selecting the stepsize. The construction of this stepsize is motivated by the one in \cite{hoai}, which was used for the gradient descent scheme. Our stepsize preserves all the typical properties of the latter: the stepsizes are computed conveniently by an explicit formula, and the sequence of stepsizes increases to a finite positive limit after some fixed iteration. The improvements lie not only in enlarging the applicable class of problems but also in allowing longer step-lengths than the original version in \cite{hoai}. The proposed method is proved to generate a decreasing and convergent sequence of objective values, with the complexity bound $O\left(\frac{1}{k}\right)$ for $F(x^k)-F_*$. This rate is strengthened to {\it Q-linear} convergence when $f$ is, in addition, locally strongly convex. We show that our algorithm can be extended to a class of nonconvex composite models once the global Lipschitz condition on $\nabla f$ is additionally assumed, and the resulting stepsize in the nonconvex case can be larger than the first one. As a byproduct, a special version is introduced for the case where the smooth term is an indefinite quadratic; there the adaptive stepsize can even be doubled in length compared to the former. The efficiency of the proposed algorithms is illustrated by numerical results on a set of test instances.

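The abstract does not reproduce the explicit stepsize formula, so the following is only a minimal sketch of a proximal gradient iteration with an adaptive, nonmonotone stepsize, applied to a hypothetical $\ell_1$-regularized least-squares instance ($f(x)=\tfrac{1}{2}\|Ax-b\|^2$, $g(x)=\mu\|x\|_1$). The update rule shown (a local curvature estimate combined with a controlled growth cap) is an assumption made for illustration, not the stepsize proposed in the paper.

\begin{verbatim}
# Hypothetical sketch of a proximal gradient scheme with an adaptive
# stepsize; the stepsize rule below is illustrative, not the paper's rule.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_adaptive(A, b, mu, x0, iters=500, lam0=1e-3):
    """Minimize 0.5*||Ax-b||^2 + mu*||x||_1 with an adaptive stepsize."""
    x_prev = x0.copy()
    g_prev = A.T @ (A @ x_prev - b)          # gradient of f at x0
    lam_prev = lam = lam0                     # small initial stepsize
    x = soft_threshold(x_prev - lam * g_prev, lam * mu)
    for _ in range(iters):
        g = A.T @ (A @ x - b)
        dg = np.linalg.norm(g - g_prev)
        theta = lam / lam_prev
        lam_prev = lam
        if dg > 0:
            # allow controlled growth, capped by a local curvature estimate
            lam = min(np.sqrt(1.0 + theta) * lam,
                      0.5 * np.linalg.norm(x - x_prev) / dg)
        else:
            lam = np.sqrt(1.0 + theta) * lam
        x_prev, g_prev = x, g
        x = soft_threshold(x - lam * g, lam * mu)
    return x

# Usage on random data
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
x_hat = prox_grad_adaptive(A, b, mu=0.1, x0=np.zeros(100))
\end{verbatim}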