The \emph{alternating direction method of multipliers} (ADMM) is a popular and efficient first-order method that has recently found numerous applications, and the proximal ADMM is an important variant of it. The main contributions of this paper are the proposal and analysis of a class of inertial proximal ADMMs, which unify the basic ideas of the inertial proximal point method and the proximal ADMM, for linearly constrained separable convex optimization. This class of methods is inertial in nature because, at each iteration, the proximal ADMM is applied to a point extrapolated from the current iterate in the direction of the last movement. The recently proposed inertial primal-dual algorithm \cite[Algorithm 3]{CP14} and the inertial linearized ADMM \cite[Eq. (3.23)]{CMY14a} are covered as special cases. The proposed algorithmic framework is very general in the sense that the weighting matrices in the proximal terms are required only to be positive semidefinite, rather than positive definite as in existing methods of the same kind. By setting the two proximal terms to zero, we obtain an inertial variant of the classical ADMM, which, to the best of our knowledge, is new. We carry out a unified analysis for the entire class of methods under very mild assumptions. In particular, we establish convergence, as well as asymptotic $o(1/\sqrt{k})$ and nonasymptotic $O(1/\sqrt{k})$ rates of convergence, for the best primal function value and the feasibility residuals, where $k$ denotes the iteration counter. Global convergence of the sequence of iterates is established under an additional assumption. We also present extensive experimental results on total variation based image reconstruction problems to illustrate the benefits gained by introducing the inertial extrapolation steps.
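To make the inertial mechanism concrete, a minimal sketch of one iteration for the generic problem $\min_{x,y}\{f(x)+g(y) : Ax+By=b\}$ might read as follows; the extrapolation parameter $\alpha_k \ge 0$, penalty parameter $\beta>0$, and positive semidefinite weighting matrices $P$ and $Q$ (with $\|v\|_P^2 = v^\top P v$) are generic notation chosen here for illustration, not necessarily the paper's.

% One iteration of an inertial proximal ADMM (sketch; amsmath syntax).
% The subproblem minimizers are written with \in since P and Q may be
% only positive semidefinite, so the minimizers need not be unique.
\begin{align*}
(\bar x^k,\bar y^k,\bar\lambda^k) &= (x^k,y^k,\lambda^k)
  + \alpha_k\bigl((x^k,y^k,\lambda^k) - (x^{k-1},y^{k-1},\lambda^{k-1})\bigr),
  && \text{inertial extrapolation}\\
x^{k+1} &\in \operatorname*{arg\,min}_x\; f(x)
  + \tfrac{\beta}{2}\bigl\|Ax + B\bar y^k - b - \bar\lambda^k/\beta\bigr\|^2
  + \tfrac12\|x - \bar x^k\|_P^2,
  && \text{proximal $x$-step}\\
y^{k+1} &\in \operatorname*{arg\,min}_y\; g(y)
  + \tfrac{\beta}{2}\bigl\|Ax^{k+1} + By - b - \bar\lambda^k/\beta\bigr\|^2
  + \tfrac12\|y - \bar y^k\|_Q^2,
  && \text{proximal $y$-step}\\
\lambda^{k+1} &= \bar\lambda^k - \beta\,(Ax^{k+1} + By^{k+1} - b).
  && \text{multiplier update}
\end{align*}

Setting $P = Q = 0$ recovers the inertial variant of the classical ADMM mentioned above, while $\alpha_k = 0$ recovers the standard proximal ADMM.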
Citation: SIAM Journal on Imaging Sciences, to appear.