Augmented Lagrangian and Alternating Direction Methods for Convex Optimization: A Tutorial and Some Illustrative Computational Results

The alternating direction method of multipliers (ADMM) is a form of augmented Lagrangian algorithm that has experienced a renaissance in recent years, owing to its applicability to optimization problems arising from “big data” and image processing applications, and to the relative ease with which it may be implemented in parallel and distributed computational environments. This paper aims …
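For reference, the two-block ADMM iteration this tutorial treatment revolves around can be stated as follows; this is the standard textbook form (notation is ours, not quoted from the paper):

```latex
% Augmented Lagrangian and ADMM for  min f(x) + g(y)  s.t.  Ax + By = b:
\begin{align*}
L_\rho(x, y, \lambda) &= f(x) + g(y) + \lambda^\top (Ax + By - b)
                         + \tfrac{\rho}{2} \|Ax + By - b\|_2^2, \\
x^{k+1} &= \operatorname*{arg\,min}_x \; L_\rho(x, y^k, \lambda^k), \\
y^{k+1} &= \operatorname*{arg\,min}_y \; L_\rho(x^{k+1}, y, \lambda^k), \\
\lambda^{k+1} &= \lambda^k + \rho \,(A x^{k+1} + B y^{k+1} - b).
\end{align*}
```

Unlike the classical method of multipliers, which minimizes the augmented Lagrangian jointly over (x, y), ADMM minimizes over the two blocks one at a time; this is what makes splitting, parallel, and distributed implementations possible.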

Convergence rate and iteration complexity on the alternating direction method of multipliers with a substitution procedure for separable convex programming

Recently, in [17] we showed for the first time that the Douglas-Rachford alternating direction method of multipliers (ADMM) can be combined with a Gaussian back substitution procedure to solve a convex minimization model with a general separable structure. This paper is a further study of the theoretical aspects of this theme. We first derive a general algorithmic framework …
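For orientation, the general separable model studied in this line of work can be written as follows (a standard formulation; the paper's precise assumptions may differ):

```latex
% m-block separable convex model:
\begin{align*}
\min \;\; & \sum_{i=1}^{m} \theta_i(x_i) \\
\text{s.t.} \;\; & \sum_{i=1}^{m} A_i x_i = b, \qquad x_i \in \mathcal{X}_i, \quad i = 1, \dots, m,
\end{align*}
```

with each θ_i convex. The Gaussian back substitution step acts as a correction applied after a straightforward multi-block ADMM sweep, since the direct extension of two-block ADMM to m ≥ 3 blocks is not guaranteed to converge in general.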

Alternating Proximal Gradient Method for Convex Minimization

In this paper, we propose an alternating proximal gradient method that solves convex minimization problems with three or more separable blocks in the objective function. Our method is based on the framework of the alternating direction method of multipliers. The main computational effort in each iteration of the proposed method is to compute the proximal mappings …
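Since the abstract highlights proximal mappings as the dominant per-iteration cost, here is a minimal sketch of one such mapping, the proximal operator of the ℓ1 norm (soft thresholding), which has a closed form; this is a generic illustration, not code from the paper:

```python
import numpy as np

def prox_l1(v, t):
    """Proximal mapping of t*||.||_1, i.e.
    argmin_x  t*||x||_1 + (1/2)*||x - v||_2^2  (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Each entry of v is shrunk toward zero by t and clipped at zero.
v = np.array([1.5, -0.2, 0.7, -2.0])
print(prox_l1(v, 0.5))  # -> [ 1.  -0.   0.2 -1.5]
```

Blocks whose proximal mapping has such a closed form (the ℓ1 norm, the nuclear norm given an SVD, the indicator of a simple set) keep each iteration cheap.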

On the Global and Linear Convergence of the Generalized Alternating Direction Method of Multipliers

The formulation min f(x)+g(y) subject to Ax+By=b, where f and g are extended-value convex functions, arises in many application areas, such as signal processing, imaging, statistics, and machine learning, either naturally or after variable splitting. In many common problems, one of the two objective functions is strictly convex and has Lipschitz continuous …
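As a concrete instance of this formulation obtained by variable splitting, consider the lasso with f(x) = (1/2)‖Dx − c‖², g(y) = μ‖y‖₁, and the constraint x − y = 0 (so A = I, B = −I). Below is a minimal ADMM sketch for this instance, with hypothetical data D, c and weight mu (a generic illustration, not the paper's code):

```python
import numpy as np

def lasso_admm(D, c, mu, rho=1.0, iters=200):
    """ADMM for  min (1/2)||Dx - c||^2 + mu*||y||_1  s.t.  x - y = 0."""
    n = D.shape[1]
    y = np.zeros(n)
    lam = np.zeros(n)
    M = D.T @ D + rho * np.eye(n)  # x-update matrix is fixed; could be factored once
    for _ in range(iters):
        # x-update: minimize the augmented Lagrangian over x (a linear solve)
        x = np.linalg.solve(M, D.T @ c - lam + rho * y)
        # y-update: proximal mapping of (mu/rho)*||.||_1 (soft thresholding)
        v = x + lam / rho
        y = np.sign(v) * np.maximum(np.abs(v) - mu / rho, 0.0)
        # dual ascent step on the multiplier
        lam = lam + rho * (x - y)
    return y

rng = np.random.default_rng(0)
D = rng.standard_normal((30, 10))
c = rng.standard_normal(30)
print(lasso_admm(D, c, mu=0.5))
```

Here g is not strictly convex, but f is strictly convex (when D has full column rank) with a Lipschitz continuous gradient, the kind of asymmetry such linear-convergence analyses exploit.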

D-ADMM: A Communication-Efficient Distributed Algorithm For Separable Optimization

We propose a distributed algorithm, named D-ADMM, for solving separable optimization problems in networks of interconnected nodes or agents. In a separable optimization problem, the cost function is the sum of all the agents’ private cost functions, and the constraint set is the intersection of all the agents’ private constraint sets. We require the private …
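For orientation, the consensus reformulation that such distributed methods operate on can be sketched as follows (a standard restatement; the notation is ours, not the paper's):

```latex
% Global problem vs. its edge-consensus reformulation over a connected
% network with P nodes and edge set E:
\begin{align*}
\min_{\bar{x}} \; \sum_{i=1}^{P} f_i(\bar{x}) \;\;
  \text{s.t.} \;\; \bar{x} \in \bigcap_{i=1}^{P} X_i
\quad \Longleftrightarrow \quad
\min_{x_1, \dots, x_P} \; \sum_{i=1}^{P} f_i(x_i) \;\;
  \text{s.t.} \;\; x_i \in X_i, \;\; x_i = x_j \;\; \forall (i,j) \in E.
\end{align*}
```

Because the local copies are coupled only along network edges, each node can run its private update and exchange its current iterate with its neighbors alone, which is what keeps the communication per iteration low.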

On non-ergodic convergence rate of Douglas-Rachford alternating direction method of multipliers

Recently, a worst-case O(1/t) convergence rate was established for the Douglas-Rachford alternating direction method of multipliers in an ergodic sense. This note proposes a novel approach to derive the same convergence rate in a non-ergodic sense.
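To make the distinction concrete (standard terminology, our notation): an ergodic rate bounds a quantity evaluated at the running average of the iterates, whereas a non-ergodic rate bounds it at the iterate itself, which is the stronger statement.

```latex
% Ergodic vs. non-ergodic O(1/t) statements for the iterates {w_k}:
\begin{align*}
\text{ergodic:} \quad & \text{the bound holds at the average } \bar{w}_t = \tfrac{1}{t} \textstyle\sum_{k=1}^{t} w_k, \\
\text{non-ergodic:} \quad & \text{the bound holds at } w_t \text{ itself, e.g. } \|w_t - w_{t+1}\|^2 = O(1/t).
\end{align*}
```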