Operator splitting schemes are a class of powerful algorithms for solving complicated monotone inclusion and convex optimization problems that are built from many simpler pieces. They give rise to algorithms in which each simple piece of the decomposition is processed individually, which leads to easily implementable and highly parallelizable or distributed methods that often achieve nearly state-of-the-art performance. In this paper, we analyze the convergence rate of the forward-Douglas-Rachford splitting (FDRS) algorithm, a generalization of the forward-backward splitting (FBS) and Douglas-Rachford splitting (DRS) algorithms. Under general convexity assumptions, we derive ergodic and nonergodic convergence rates for the FDRS algorithm and show that these rates are the best possible. Under Lipschitz differentiability assumptions, we show that the best iterate of FDRS converges as quickly as the last iterate of the FBS algorithm. Under strong convexity assumptions, we derive convergence rates for a sequence that converges strongly to a minimizer. Under strong convexity and Lipschitz differentiability assumptions, we show that FDRS converges linearly. We also provide examples in which the objective is strongly convex, yet FDRS converges arbitrarily slowly. Finally, we relate the FDRS algorithm to a primal-dual forward-backward splitting scheme and clarify its place among existing splitting methods. Our results show that the FDRS algorithm automatically adapts to the regularity of the objective functions and achieves rates that improve upon the sharp worst-case rates that hold in the absence of smoothness and strong convexity.
Convergence rate analysis of the forward-Douglas-Rachford splitting scheme. UCLA CAM report 14-73.
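As a concrete illustration of the kind of scheme discussed above, the following is a minimal numerical sketch of an FDRS-style iteration for minimizing f(x) + g(x) over a closed subspace V: project onto V, combine a forward (gradient) step on f with a prox step on g evaluated at a reflected point, then apply a relaxed fixed-point update. The specific problem instance (least squares plus an l1 penalty over a zero-mean subspace), the step size gamma = 1/L, and the relaxation parameter lam are illustrative assumptions, not the settings analyzed in the paper.

```python
import numpy as np

# Sketch of an FDRS-style iteration for
#   minimize  f(x) + g(x)  subject to  x in V,
# with f smooth (handled by a forward/gradient step), g proximable
# (handled by a prox step), and V a closed subspace (handled by projection).
# Parameter choices below are illustrative assumptions.

np.random.seed(0)
m, n = 20, 50
A = np.random.randn(m, n)
b = np.random.randn(m)
reg = 0.1                      # l1 regularization weight (assumption)

def grad_f(x):                 # f(x) = 0.5 * ||A x - b||^2
    return A.T @ (A @ x - b)

def prox_g(x, gamma):          # g(x) = reg * ||x||_1  ->  soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - gamma * reg, 0.0)

def proj_V(x):                 # V = {x : sum(x) = 0}; project by removing the mean
    return x - x.mean()

L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of grad_f
gamma = 1.0 / L                # step size (illustrative choice)
lam = 1.0                      # relaxation parameter (illustrative choice)

z = np.zeros(n)
for k in range(500):
    x = proj_V(z)                                              # projection onto V
    y = prox_g(2 * x - z - gamma * proj_V(grad_f(x)), gamma)   # reflected prox + forward step
    z = z + lam * (y - x)                                      # relaxed fixed-point update

x_star = proj_V(z)
print("objective:", 0.5 * np.linalg.norm(A @ x_star - b) ** 2 + reg * np.abs(x_star).sum())
print("constraint residual |sum(x)|:", abs(x_star.sum()))
```

When V is the whole space the projection is the identity and the update reduces to a relaxed FBS step; when f = 0 the gradient term vanishes and the update reduces to DRS applied to g and the indicator of V, which is the sense in which FDRS generalizes both methods.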