Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems

We analyze the linear convergence of several first-order methods, namely the proximal gradient method (PGM), the proximal alternating linearized minimization (PALM) algorithm, and the randomized block coordinate proximal gradient method (R-BCPGM), for minimizing the sum of a smooth convex function and a nonsmooth convex function from a variational analysis perspective. We introduce a new analytic framework built on variational-analysis tools such as error bounds, calmness, metric subregularity, and bounded metric subregularity. This perspective provides concrete sufficient conditions for verifying linear convergence, as well as applicable approaches for computing linear convergence rates of these first-order methods for a class of structured convex problems in which the smooth part of the objective function is the composition of a smooth strongly convex function with a linear map. We show that these conditions are satisfied automatically, and that the calmness/metric subregularity modulus is practically computable, in important applications such as LASSO, fused LASSO, and group LASSO. Consequently, the linear convergence of the first-order methods for these applications is guaranteed automatically and the convergence rates can be calculated. The new perspective therefore enables us to improve several existing results and to obtain results previously unknown in the literature. In particular, we improve the results on the linear convergence of the PGM and the PALM for the structured convex problem, with a computable error bound estimate, and we prove that the R-BCPGM converges linearly for the structured convex problem when the nonsmooth part of the objective function is the group LASSO regularizer.
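
To make the problem class concrete, here is a minimal sketch of the structured formulation and the proximal gradient step described above; the notation (f, A, g, step size gamma) is illustrative and chosen for this sketch, not taken from the paper.

```latex
% Structured problem class: the smooth part is a strongly convex function f
% composed with a linear map A; g is the nonsmooth convex regularizer
% (e.g. g(x) = \lambda \lVert x \rVert_1 for LASSO).
\[
  \min_{x \in \mathbb{R}^n} \; F(x) \;=\; f(Ax) + g(x),
  \qquad A \in \mathbb{R}^{m \times n},\ f \ \text{smooth and strongly convex},\ g \ \text{convex}.
\]

% Proximal gradient step with step size \gamma > 0; the gradient of the
% smooth part x \mapsto f(Ax) is A^{\top} \nabla f(Ax).
\[
  x^{k+1} \;=\; \operatorname{prox}_{\gamma g}\!\bigl( x^{k} - \gamma A^{\top} \nabla f(Ax^{k}) \bigr),
  \qquad
  \operatorname{prox}_{\gamma g}(z) \;=\; \arg\min_{u} \Bigl\{ g(u) + \tfrac{1}{2\gamma}\lVert u - z \rVert^{2} \Bigr\}.
\]
```

For the LASSO regularizer the proximal map reduces to componentwise soft-thresholding, which is one reason the calmness/metric subregularity conditions in this setting can be checked and their moduli computed.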

Citation

First version: February 2018 / Second version: October 2018 / Third version: October 2019
