Faster Alternating Direction Method of Multipliers with a Worst-case O(1/n^2) Convergence Rate

The alternating direction method of multipliers (ADMM) is widely used for convex programming models with separable structures that arise in many scientific computing areas. A worst-case O(1/n) convergence rate for ADMM, measured by iteration complexity, has been established in the literature when its penalty parameter is constant, where n is the iteration counter. Research on a worst-case O(1/n^2) convergence rate for ADMM, however, is still in its infancy. In this paper, we propose applying a rule recently introduced by Chambolle and Pock to update the penalty parameter iteratively, and we show that ADMM with this adaptively updated penalty parameter achieves a worst-case O(1/n^2) convergence rate. Our assumptions on the model are mild, do not require strong convexity of the objective function, and are satisfied by several representative applications. We test the LASSO model and numerically verify the significant acceleration achieved by the faster ADMM with its worst-case O(1/n^2) convergence rate. Moreover, the faster ADMM is more user-friendly than ADMM with a constant penalty parameter, in the sense that it can pursue solutions of very high accuracy and is not sensitive to the initial value of the penalty parameter.
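To make the tested setting concrete, the following is a minimal sketch of ADMM applied to the LASSO model min 0.5*||Ax - b||^2 + lam*||x||_1, with the splitting f(x) = 0.5*||Ax - b||^2, g(z) = lam*||z||_1 and the constraint x - z = 0, using an iteration-dependent penalty parameter. The update theta_k = 1/sqrt(1 + 2*gamma*tau_k), beta_{k+1} = beta_k/theta_k mirrors the general form of Chambolle and Pock's step-size rule; the constant gamma, this exact schedule, and all function and variable names are illustrative assumptions of this sketch, not the precise rule and conditions analyzed in the paper.

import numpy as np

def soft_threshold(v, t):
    # Entrywise soft-thresholding: the proximal operator of t*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def adaptive_admm_lasso(A, b, lam, beta=1.0, gamma=0.1, n_iter=200):
    # ADMM for min 0.5*||Ax-b||^2 + lam*||x||_1, split as
    # f(x) = 0.5*||Ax-b||^2, g(z) = lam*||z||_1, subject to x - z = 0,
    # with an iteration-dependent penalty beta_k (illustrative schedule).
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    x = z = u = np.zeros(n)      # u is the scaled dual variable, u = y/beta
    tau = 1.0 / beta
    for _ in range(n_iter):
        # x-step: minimize 0.5*||Ax-b||^2 + (beta/2)*||x - z + u||^2
        x = np.linalg.solve(AtA + beta * np.eye(n), Atb + beta * (z - u))
        # z-step: proximal step on (lam/beta)*||.||_1
        z = soft_threshold(x + u, lam / beta)
        # scaled dual ascent
        u = u + x - z
        # Adaptive penalty update in the spirit of Chambolle and Pock's
        # rule; gamma and this form are placeholders, not the paper's rule.
        theta = 1.0 / np.sqrt(1.0 + 2.0 * gamma * tau)
        tau *= theta
        beta /= theta
        u *= theta               # rescale scaled dual since beta changed
    return z

# Example usage on a small synthetic instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = adaptive_admm_lasso(A, b, lam=0.1)

Because the penalty beta_k changes across iterations, the scaled dual variable u = y/beta_k must be rescaled whenever beta_k is updated, as done above; dropping the last three update lines in the loop recovers standard ADMM for LASSO with a constant penalty parameter.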