Global linear convergence of an augmented Lagrangian algorithm for solving convex quadratic optimization problems

We consider an augmented Lagrangian algorithm for minimizing a convex quadratic function subject to linear inequality constraints. Linear optimization is an important particular instance of this problem. We show that, provided the augmentation parameter is large enough, the constraint value converges globally linearly to zero. This property is viewed as a consequence of the …
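As a hedged illustration of the setting (not the paper's exact algorithm or analysis), the sketch below runs the classical method of multipliers on a small convex QP with linear inequality constraints and prints the constraint violation at each outer iteration; the problem data Q, c, A, b and the parameter r are made up for demonstration.

```python
# Minimal sketch: classical augmented Lagrangian (method of multipliers) for
#   min 0.5 x'Qx + c'x   subject to   Ax <= b.
# All problem data and parameters below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite Hessian
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0], [-1.0, 0.0]])  # constraints Ax <= b
b = np.array([1.0, 0.0])

r = 10.0                  # augmentation parameter ("large enough")
lam = np.zeros(len(b))    # multiplier estimates
x = np.zeros(len(c))

def aug_lag(x, lam, r):
    # Rockafellar's augmented Lagrangian for inequality constraints
    s = np.maximum(0.0, lam + r * (A @ x - b))
    return 0.5 * x @ Q @ x + c @ x + (s @ s - lam @ lam) / (2.0 * r)

for k in range(20):
    x = minimize(aug_lag, x, args=(lam, r)).x      # inner minimization
    lam = np.maximum(0.0, lam + r * (A @ x - b))   # multiplier update
    viol = np.linalg.norm(np.maximum(0.0, A @ x - b))
    print(f"iter {k}: constraint violation = {viol:.2e}")
```

On this toy instance the printed violation shrinks by a roughly constant factor per outer iteration, the qualitative behavior the abstract describes.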

Double-Regularization Proximal Methods, with Complementarity Applications

We consider the variational inequality problem formed by a general set-valued maximal monotone operator and a possibly unbounded “box” in $R^n$, and study its solution by proximal methods whose distance regularizations are coercive over the box. We prove convergence for a class of double regularizations generalizing a previously proposed class of Auslender et al. We apply …
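A minimal sketch of one instance of this idea, under assumed details: a proximal method for a monotone complementarity problem over the box $[0,\infty)^n$, regularized by an entropy (Kullback-Leibler) distance, which is coercive over that box. The operator $F(x) = Mx + q$, the data, and the weight `lam` are illustrative, not from the paper.

```python
# Hedged sketch: interior proximal method for the complementarity problem
#   x >= 0,  F(x) >= 0,  x . F(x) = 0,   with F(x) = M x + q monotone,
# using the entropy distance d(x, y) = sum x*log(x/y) - x + y, whose
# gradient in x is log(x) - log(y). Data below is made up.
import numpy as np
from scipy.optimize import fsolve

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite => F monotone
q = np.array([-1.0, 1.0])
F = lambda x: M @ x + q

lam = 1.0                 # proximal regularization weight (assumed)
x = np.ones(2)            # strictly positive starting point

for k in range(30):
    zk = np.log(x)
    # Inner step: solve F(x) + lam*(log x - log x^k) = 0; the substitution
    # x = exp(z) keeps iterates inside the open box automatically.
    res = lambda z: F(np.exp(z)) + lam * (z - zk)
    x = np.exp(fsolve(res, zk))

print("x* =", x, " F(x*) =", F(x), " complementarity =", x @ F(x))
```

The change of variables is one simple way to enforce the interior-point property that the coercive regularization provides; the iterates approach the boundary solution with the inactive component driven to zero.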

A new exact penalty function

For constrained smooth or nonsmooth optimization problems, new continuously differentiable penalty functions are derived. They are proved exact in the sense that under some nondegeneracy assumption, local optimizers of a nonlinear program are precisely the optimizers of the associated penalty function. This is achieved by augmenting the dimension of the program by a variable that …
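For background only, and emphatically not the paper's new differentiable construction: the classical nondifferentiable $l_1$ penalty already has the exactness property that, once the penalty weight exceeds the optimal multiplier, the penalty minimizer coincides with the constrained one. The toy problem below (solution $(1,0)$, multiplier $\lambda^* = 2$) illustrates that threshold.

```python
# Background sketch: classical (nonsmooth) l1 exact penalty. The paper's
# contribution is a *continuously differentiable* penalty with the same
# exactness property; this example and its data are illustrative.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2     # objective
g = lambda x: x[0] + x[1] - 1.0                     # constraint g(x) <= 0
# Constrained minimizer: (1, 0), with multiplier lambda* = 2.

for rho in [1.0, 5.0]:                              # below / above lambda*
    pen = lambda x: f(x) + rho * max(0.0, g(x))     # nonsmooth exact penalty
    x = minimize(pen, np.zeros(2), method="Nelder-Mead").x
    print(f"rho = {rho}: x = {np.round(x, 4)}, g(x) = {g(x):+.4f}")
# rho = 1 (< lambda*): the penalty minimizer is infeasible;
# rho = 5 (> lambda*): it reproduces the constrained minimizer exactly.
```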

Smoothing Method of Multipliers for Sum-Max Problems

We study a nonsmooth unconstrained optimization problem that includes a sum of pairwise maxima of smooth functions. Minimum $l_1$-norm approximation is a particular case of this problem. Combining ideas of Lagrange multipliers with a smooth approximation of the max-type function, we obtain a new kind of nonquadratic augmented Lagrangian. Our approach does not require artificial variables and preserves the sparse structure …
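To make the smoothing ingredient concrete (the multiplier coupling of the paper's augmented Lagrangian is omitted here), the sketch below replaces $\max(u,v)$ by the standard log-sum-exp approximation $\mu \log(e^{u/\mu} + e^{-u/\mu})$ and solves a minimum $l_1$-norm approximation problem, $\min_x \sum_i |a_i^T x - b_i|$ with $|t| = \max(t,-t)$, by a smooth method. The smoothing kernel, data, and $\mu$ are assumptions for illustration.

```python
# Hedged sketch of the smoothing step only: |t| = max(t, -t) is approximated
# by mu*log(exp(t/mu) + exp(-t/mu)), written in a numerically stable form,
# and the smoothed sum is minimized by BFGS. Data is synthetic.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
b = A @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=50)

def smoothed_l1(x, mu=1e-2):
    t = A @ x - b
    # mu*log(e^{t/mu} + e^{-t/mu}) = |t| + mu*log(1 + e^{-2|t|/mu})
    at = np.abs(t)
    return np.sum(at + mu * np.log1p(np.exp(-2.0 * at / mu)))

x = minimize(smoothed_l1, np.zeros(3), method="BFGS").x
print("smoothed l1 fit:", np.round(x, 3))
```

Note that no artificial variables are introduced and the structure of $A$ is untouched, the two properties the abstract highlights.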

A practical general approximation criterion for methods of multipliers based on Bregman distances

This paper demonstrates that for generalized methods of multipliers for convex programming based on Bregman distance kernels — including the classical quadratic method of multipliers — the minimization of the augmented Lagrangian can be truncated using a simple, generally implementable stopping criterion based only on the norms of the primal iterate and the gradient (or …
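A sketch of the idea under assumed details: the classical quadratic method of multipliers (one Bregman kernel) with an inexact inner solve, truncated by a simple test on the gradient norm relative to the primal iterate norm. The precise form of the paper's criterion may differ; the test, tolerances, and problem data below are illustrative.

```python
# Hedged sketch: quadratic method of multipliers with truncated inner solves.
#   min 0.5 ||x - a||^2   subject to   Bx = d.   All data assumed.
import numpy as np

a = np.array([3.0, 1.0, 2.0])
B = np.array([[1.0, 1.0, 1.0]])         # constraint Bx = d
d = np.array([1.0])

r, lam, x = 5.0, np.zeros(1), np.zeros(3)

def grad_L(x, lam):                     # gradient of the augmented Lagrangian
    return (x - a) + B.T @ (lam + r * (B @ x - d))

for k in range(30):
    # Truncation test using only the iterate and gradient norms (assumed form)
    tol = 0.5 ** k * max(1.0, np.linalg.norm(x))
    while np.linalg.norm(grad_L(x, lam)) > tol:    # inexact inner minimization
        x = x - 0.1 * grad_L(x, lam)               # plain gradient steps
    lam = lam + r * (B @ x - d)                    # multiplier update

print("x =", x, " residual =", B @ x - d)
```

The point of such a criterion is that each inner loop terminates after finitely many steps using only computable quantities, yet the outer method still converges as the tolerance is driven to zero.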