Global linear convergence of an augmented Lagrangian algorithm for solving convex quadratic optimization problems

We consider an augmented Lagrangian algorithm for minimizing a convex quadratic function subject to linear inequality constraints. Linear optimization is an important particular instance of this problem. We show that, provided the augmentation parameter is large enough, the constraint value converges globally linearly to zero. This property is viewed as a consequence of the proximal interpretation of the algorithm and of the global radial Lipschitz continuity of the reciprocal of the dual function subdifferential. This Lipschitz property is itself obtained by means of a lemma of general interest, which compares the distances from a point in the positive orthant to an affine space, on the one hand, and to the polyhedron given by the intersection of this affine space and the positive orthant, on the other hand. No strict complementarity assumption is needed. The result is illustrated by numerical experiments, and algorithmic implications, including complexity issues, are discussed.
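For orientation, here is a minimal sketch of an augmented Lagrangian iteration of the kind studied in the report, written for an assumed problem formulation $\min_x \tfrac12 x^\top Q x + q^\top x$ subject to $A x \le b$, with augmentation parameter $r > 0$ and multiplier estimate $\lambda_k \ge 0$; the report's exact constraint format and notation may differ.

% Assumed notation: Q symmetric positive semidefinite, inequality constraints A x <= b,
% augmentation parameter r > 0, multipliers lambda_k >= 0, (.)_+ = componentwise max with 0.
\[
  x_{k+1} \in \operatorname*{arg\,min}_{x}\;
    \tfrac12 x^{\top} Q x + q^{\top} x
    + \tfrac{1}{2r}\,\bigl\|\bigl(\lambda_k + r\,(A x - b)\bigr)_{+}\bigr\|^{2},
  \qquad
  \lambda_{k+1} = \bigl(\lambda_k + r\,(A x_{k+1} - b)\bigr)_{+}.
\]

In this form the multiplier update can be read as a proximal-point step on the dual function, which is the proximal interpretation invoked above; the global linear convergence claimed there concerns the constraint value $\|(A x_{k+1} - b)_{+}\|$, provided $r$ is taken large enough.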

Citation

Research Report RR-5028, INRIA Rocquencourt, BP 105, 78153 Le Chesnay, France. To appear in Journal of Convex Analysis.
