This short article analyzes an interesting property of the Bregman iterative procedure, which is equivalent to the augmented Lagrangian method after a change of variables, for minimizing a convex piecewise-linear function $J(x)$ subject to linear constraints $Ax=b$. The procedure obtains its solution by solving a sequence of unconstrained subproblems of minimizing $J(x)+\frac{1}{2}\|Ax-b^k\|^2$, where $b^k$ is updated iteratively. In practice, the subproblem at each iteration is solved at relatively low accuracy. Let $w^k$ denote the error introduced by early stopping the subproblem solver at iteration $k$. We show that if all $w^k$ are sufficiently small so that Bregman iteration enters the optimal face, then, while on the optimal face, Bregman iteration enjoys an interesting error-forgetting property: the distance between the current point and the optimal solution set is bounded by $\|w^{k+1}-w^k\|$, independent of all previous errors. This property partially explains why the Bregman iterative procedure works well for sparse optimization and, in particular, for $\ell_1$-minimization. The error-forgetting property is unique to a $J(x)$ that is piecewise linear (also known as a polyhedral function), and the results of this article appear to be new to the literature on the augmented Lagrangian method.
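The procedure described above can be sketched in a short numerical experiment for the $\ell_1$ case, $J(x)=\|x\|_1$: each outer Bregman step inexactly minimizes $\|x\|_1+\frac{\mu}{2}\|Ax-b^k\|^2$ (early stopping of the inner solver plays the role of the error $w^k$), then updates $b^{k+1}=b^k+(b-Ax^{k+1})$. The choice of inner solver (ISTA), the penalty weight `mu`, and the problem sizes below are illustrative assumptions, not taken from the article.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bregman_l1(A, b, mu=1.0, outer=50, inner=100):
    """Bregman iteration for min ||x||_1 s.t. Ax = b, with each
    subproblem min ||x||_1 + (mu/2)||Ax - b^k||^2 solved inexactly
    by a fixed number of ISTA passes (early stopping)."""
    m, n = A.shape
    x = np.zeros(n)
    bk = b.copy()                                    # b^k, updated each outer step
    step = 1.0 / (mu * np.linalg.norm(A, 2) ** 2)    # 1/L for the smooth term
    for _ in range(outer):
        for _ in range(inner):                       # inexact subproblem solve
            grad = mu * A.T @ (A @ x - bk)
            x = soft_threshold(x - step * grad, step)
        bk = bk + (b - A @ x)                        # add the residual back
    return x

# Small synthetic sparse-recovery instance (sizes chosen for illustration).
rng = np.random.default_rng(0)
m, n, k = 20, 50, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

x_hat = bregman_l1(A, b)
print(np.linalg.norm(A @ x_hat - b) / np.linalg.norm(b))  # relative residual
```

Even though each subproblem is solved only coarsely, the residual $\|Ax^k-b\|$ shrinks across outer iterations; this robustness to accumulated inner-solver error is what the error-forgetting property helps explain.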

## Citation

Rice CAAM Report TR12-03, 2012