A parallel splitting ALM-based algorithm for separable convex programming

The augmented Lagrangian method (ALM) provides a benchmark for tackling the canonical convex minimization problem with linear constraints. We consider a special case where the objective function is the sum of $m$ individual subfunctions with no coupled variables. Recent studies reveal that the direct extension of ALM to separable convex programming problems is not necessarily convergent, even though it is empirically efficient for many applications; this has inspired a number of variants. In this paper, with parallel computing in mind, we propose a novel operator splitting method for solving the separable convex minimization problem. We first show that a fully Jacobian decomposition of the regularized ALM yields a descent direction that ensures contraction of the proximity to the solution set; we then generate the new iterate via a simple correction step with a constant step size. We establish convergence of the proposed method and illustrate its numerical efficiency on the latent variable Gaussian graphical model selection problem arising in statistical learning.
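To make the prediction-correction structure described above concrete, the following is a minimal sketch of a Jacobian-type parallel splitting of the regularized ALM for a toy problem $\min \sum_i f_i(x_i)$ s.t. $\sum_i A_i x_i = b$. The quadratic choice $f_i(x_i) = \tfrac{1}{2}\|x_i - c_i\|^2$, the parameters `beta`, `tau`, `alpha`, and the exact correction rule are illustrative assumptions, not the scheme or constants analyzed in the paper.

```python
# Illustrative prediction-correction sketch (not the paper's exact method):
# parallel (Jacobian) splitting of a regularized augmented Lagrangian for
#     min  sum_i f_i(x_i)   s.t.  sum_i A_i x_i = b,
# with quadratic f_i(x_i) = 0.5*||x_i - c_i||^2 so subproblems are closed-form.
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 3, 4, 5                      # number of blocks, block size, constraint rows
A = [rng.standard_normal((p, n)) for _ in range(m)]
c = [rng.standard_normal(n) for _ in range(m)]
b = rng.standard_normal(p)

beta, tau, alpha = 1.0, float(m), 0.8  # penalty, proximal weight, constant correction step (assumed values)
x = [np.zeros(n) for _ in range(m)]
lam = np.zeros(p)

for k in range(500):
    # --- prediction: all block subproblems can be solved in parallel (Jacobian) ---
    Ax = sum(A[i] @ x[i] for i in range(m))
    x_tilde = []
    for i in range(m):
        s_i = Ax - A[i] @ x[i]                    # other blocks frozen at iterate k
        r_i = b + lam / beta - s_i
        H = (1.0 + tau) * np.eye(n) + beta * A[i].T @ A[i]
        g = c[i] + beta * A[i].T @ r_i + tau * x[i]
        x_tilde.append(np.linalg.solve(H, g))     # closed-form regularized subproblem
    lam_tilde = lam - beta * (sum(A[i] @ x_tilde[i] for i in range(m)) - b)

    # --- correction: move a constant fraction along the predicted direction ---
    x = [x[i] - alpha * (x[i] - x_tilde[i]) for i in range(m)]
    lam = lam - alpha * (lam - lam_tilde)

print("constraint residual:", np.linalg.norm(sum(A[i] @ x[i] for i in range(m)) - b))
```

The prediction step produces a trial point whose difference from the current iterate serves as the descent direction; the correction step then contracts the distance to the solution set by taking a fixed step along that direction.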
