In this paper, we study the local linear convergence properties of a versatile class of Primal–Dual splitting methods for solving composite non-smooth convex optimization problems. Under the assumption that the non-smooth components of the problem are partly smooth relative to smooth manifolds, we present a unified local convergence analysis framework for these Primal–Dual splitting methods. More precisely, within this framework we first show that (i) the sequences generated by Primal–Dual splitting methods identify a pair of primal and dual smooth manifolds in a finite number of iterations, and then (ii) enter a local linear convergence regime, which is characterized, for instance, in terms of the structure of the underlying active smooth manifolds. We also show how our results for Primal–Dual splitting specialize to cover existing ones on Forward–Backward splitting and Douglas–Rachford splitting/ADMM (the alternating direction method of multipliers). Moreover, based on this local convergence analysis, we discuss several practical acceleration techniques for the class of Primal–Dual splitting methods. To exemplify the usefulness of the obtained results, we consider several concrete numerical experiments arising from fields including signal/image processing, inverse problems and machine learning. The experiments not only verify the local linear convergence behaviour of Primal–Dual splitting methods, but also illustrate how to accelerate them in practice.
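As a minimal illustration of the finite identification phenomenon described above (a sketch constructed for this summary, not an experiment from the paper), consider Forward–Backward splitting (ISTA) applied to a small LASSO problem. The ℓ₁ norm is partly smooth relative to the manifold of vectors with a fixed support, so the support of the iterates is expected to stabilize after finitely many iterations; the problem sizes and regularization parameter below are arbitrary choices for demonstration.

```python
# Sketch: Forward-Backward splitting (ISTA) on a small LASSO problem,
#   min_x 0.5*||Ax - b||^2 + lam*||x||_1.
# The l1 norm is partly smooth relative to the fixed-support manifold,
# so the iterates' support (the active manifold) should be identified
# after finitely many iterations, after which convergence is locally linear.
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 10
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:3] = [1.0, -2.0, 0.5]          # sparse ground truth (illustrative)
b = A @ x_true + 0.01 * rng.standard_normal(m)
lam = 0.5                              # regularization weight (illustrative)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # step size 1/L, L = ||A||^2

def soft_threshold(v, t):
    """Proximity operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
supports = []
for k in range(500):
    grad = A.T @ (A @ x - b)                       # forward (gradient) step
    x = soft_threshold(x - step * grad, step * lam)  # backward (prox) step
    supports.append(tuple(np.flatnonzero(x)))

print("final support:", supports[-1])
print("support stable over last 100 iterations:",
      len(set(supports[-100:])) == 1)
```

On such a well-conditioned instance the printed support typically stops changing long before the iterate itself has converged, which is exactly the two-regime behaviour (finite identification, then local linear convergence) that the analysis formalizes for the broader Primal–Dual class.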