Doubly stochastic primal dual splitting algorithm with variance reduction for saddle point problems

Structured saddle-point problems involving an infimal convolution in real Hilbert spaces arise in many areas of applied mathematics. We develop a stochastic primal-dual splitting (PDS) algorithm with loopless variance reduction (VR) for solving this generic problem. A PDS algorithm overcomes well-known shortcomings of common splitting methods by solving the primal-dual pair formed by a monotone inclusion and its dual (in the sense of Fenchel–Rockafellar), reformulated as a single monotone inclusion in a suitable product space. The stochastic nature of the algorithm avoids evaluating the full gradient at each iteration, an operation that can be computationally intensive when carried out over the full dataset. Variance-reduction techniques are motivated by the convergence profile of stochastic first-order methods with a fixed step size, which stall in a noise-dominated neighborhood of the solution. From the perspective of the memory versus computation-time tradeoff, loopless VR offers several advantages, particularly in computational time, over alternatives such as double-loop VR. In this respect, we first prove weak almost sure convergence of the iterates; we then show that the algorithm achieves linear convergence of the iterates in expectation, as well as convergence of the (smoothed and duality) gap function values.
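To make the loopless VR idea concrete, the following is a minimal sketch of an L-SVRG-style gradient estimator on a hypothetical toy least-squares problem. It is not the paper's algorithm (which operates on a primal-dual pair); it only illustrates the loopless mechanism the abstract refers to: the inner loop of double-loop VR is replaced by a coin flip that occasionally refreshes the full-gradient snapshot. All problem data and parameter values here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: f(x) = (1/2n) * ||A x - b||^2 with b = A x_true
n, d = 200, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true

def grad_i(x, i):
    # Gradient of the i-th component f_i(x) = 0.5 * (a_i^T x - b_i)^2
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    # Full gradient, evaluated only when the snapshot is refreshed
    return A.T @ (A @ x - b) / n

def l_svrg(x0, step=0.01, p=0.05, iters=3000):
    x = x0.copy()
    w = x0.copy()          # snapshot (reference) point
    mu = full_grad(w)      # full gradient stored at the snapshot
    for _ in range(iters):
        i = rng.integers(n)
        # Variance-reduced estimator: unbiased, variance vanishes at optimum
        g = grad_i(x, i) - grad_i(w, i) + mu
        x -= step * g
        # Loopless trick: with probability p, refresh the snapshot
        # (replaces the fixed-length inner loop of double-loop VR)
        if rng.random() < p:
            w = x.copy()
            mu = full_grad(w)
    return x

x_hat = l_svrg(np.zeros(d))
```

Because the estimator's variance vanishes at the solution, a fixed step size yields linear convergence on this toy problem, in contrast to plain SGD with decaying steps; the expected memory cost is one extra iterate plus one stored full gradient.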
