We consider the variational inequality problem formed by a general set-valued maximal monotone operator and a possibly unbounded ``box'' in $R^n$, and study its solution by proximal methods whose distance regularizations are coercive over the box. We prove convergence for a class of double regularizations generalizing a class previously proposed by Auslender et al. We apply these regularizations to complementarity problems via a dual formulation, obtaining a broadened class of generalized augmented Lagrangian methods. We point out connections between these methods and earlier ``pure penalty'' smoothing methods for complementarity; this connection leads to a new augmented Lagrangian based on the ``neural network'' smoothing function. Finally, we computationally compare this augmented Lagrangian with the previously known logarithmic-quadratic variant on the MCPLIB problem library, and show that the neural-network approach offers some advantages.
Report RRR 29-2003, RUTCOR, 640 Bartholomew Road, Rutgers University, Piscataway NJ 08854 USA, August 2003
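The ``neural network'' smoothing function referred to above is commonly attributed to Chen and Mangasarian: it smooths the plus function $(t)_+ = \max(0,t)$ by integrating a sigmoid. A minimal sketch of this smoothing is given below; the specific parameterization and function name are our illustration, not taken from the report itself.

```python
import math

def neural_smoothing(t, beta=10.0):
    """Chen-Mangasarian 'neural network' smoothing of the plus
    function (t)_+ = max(0, t):

        p(t, beta) = (1/beta) * log(1 + exp(beta * t)).

    Rewritten as max(0, t) + (1/beta) * log(1 + exp(-beta * |t|))
    so the exponential never overflows for large |t|."""
    return max(0.0, t) + math.log1p(math.exp(-beta * abs(t))) / beta

# As beta grows, p(t, beta) -> max(0, t) uniformly; the gap
# (1/beta) * log(1 + exp(-beta * |t|)) is at most log(2)/beta,
# with the maximum attained at t = 0.
for t in (-2.0, 0.0, 2.0):
    gap = neural_smoothing(t, beta=100.0) - max(0.0, t)
    assert 0.0 <= gap <= math.log(2) / 100.0 + 1e-12
```

The smoothing is everywhere differentiable with derivative equal to the sigmoid $1/(1+e^{-\beta t})$, which is what makes it suitable for building smooth reformulations of complementarity conditions.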