The alternating direction method of multipliers (ADMM) is a form of augmented Lagrangian algorithm that has experienced a renaissance in recent years due to its applicability to optimization problems arising from “big data” and image processing applications, and the relative ease with which it may be implemented in parallel and distributed computational environments. While it is easiest to describe the method as an approximate version of the classical augmented Lagrangian algorithm, using one pass of block coordinate minimization to approximately minimize the augmented Lagrangian at each iteration, the known convergence proofs bear no discernible relationship to this description. In this largely tutorial paper, we try to give an accessible version of the “operator splitting” version of the ADMM convergence proof, first developing some analytical tools that we also use to analyze a simple variant of the classical augmented Lagrangian method. We assume relatively little prior knowledge of convex analysis. Using two dissimilar classes of application problems, we also computationally compare the ADMM to some algorithms that do indeed work by approximately minimizing the augmented Lagrangian. The results suggest that the ADMM is different from such methods not only in its convergence analysis but also in its computational behavior.
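To make the description above concrete, the following is a minimal illustrative sketch (not from the paper; the problem instance, function names, and parameter defaults are our own choices) of the ADMM iteration on a small splitting problem, min 0.5‖x − a‖² + λ‖z‖₁ subject to x = z. Each iteration performs exactly one pass of block minimization over x and then z of the augmented Lagrangian, followed by a multiplier update, as in the abstract's informal description:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1_sketch(a, lam, rho=1.0, iters=200):
    """Sketch of ADMM for min_x 0.5||x - a||^2 + lam*||x||_1,
    split as f(x) + g(z) with constraint x = z.
    Hypothetical illustration; rho and iters are arbitrary choices."""
    a = np.asarray(a, dtype=float)
    x = np.zeros_like(a)
    z = np.zeros_like(a)
    u = np.zeros_like(a)  # scaled multiplier (dual) estimate
    for _ in range(iters):
        # x-block: minimize 0.5||x - a||^2 + (rho/2)||x - z + u||^2
        x = (a + rho * (z - u)) / (1.0 + rho)
        # z-block: minimize lam*||z||_1 + (rho/2)||x - z + u||^2
        z = soft_threshold(x + u, lam / rho)
        # multiplier update on the residual x - z
        u = u + x - z
    return z
```

For this instance the exact minimizer is the soft-thresholding of `a` by `lam`, so the iterates can be checked against a closed form; in the parallel and distributed settings the abstract mentions, the same two-block pattern is applied with the blocks distributed across machines.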