Convergence analysis of an inexact relaxed augmented Lagrangian method

In this paper, we develop an Inexact Relaxed Augmented Lagrangian Method (IR-ALM) for solving a class of convex optimization problems. Flexible relative error criteria are designed for approximately solving the resulting subproblems, and a relaxation step is employed to accelerate convergence numerically. Through a unified variational analysis, we establish the global convergence of this …
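The abstract's two ingredients, an approximate (inexact) subproblem solve and a relaxation step in the multiplier update, can be illustrated on a toy equality-constrained problem. This is a generic sketch of a relaxed augmented Lagrangian iteration, not the paper's IR-ALM; the problem data, the penalty parameter `rho`, and the relaxation factor `gamma` are all illustrative assumptions.

```python
import numpy as np

def relaxed_alm(A, b, c, rho=10.0, gamma=1.6, tol=1e-8, max_iter=200):
    """Augmented Lagrangian method with an over-relaxed multiplier update
    for the toy QP:  min 0.5*||x - c||^2  s.t.  Ax = b.
    gamma in (0, 2) is the relaxation factor; gamma = 1 recovers plain ALM."""
    m, n = A.shape
    lam = np.zeros(m)
    H = np.eye(n) + rho * A.T @ A          # Hessian of the AL subproblem
    for _ in range(max_iter):
        # Exact subproblem solve for illustration; an inexact variant would
        # instead stop an iterative solver (e.g. CG) once a relative-error
        # criterion on the subproblem gradient is met.
        x = np.linalg.solve(H, c - A.T @ lam + rho * A.T @ b)
        r = A @ x - b                       # primal residual
        lam = lam + gamma * rho * r         # relaxed dual ascent step
        if np.linalg.norm(r) < tol:
            break
    return x, lam
```

For example, projecting `c = [3, 1]` onto the line `x1 + x2 = 1` returns approximately `[1.5, -0.5]`; with `gamma` above 1 the dual update takes a longer step than plain ALM, which is the numerical acceleration the relaxation step aims at.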

A New Insight on Augmented Lagrangian Method with Applications in Machine Learning

By exploiting double-penalty terms for the primal subproblem, we develop a novel relaxed augmented Lagrangian method for solving a family of convex optimization problems subject to equality or inequality constraints. This new method is then extended to solve a general multi-block separable convex optimization problem, and two related primal-dual hybrid gradient algorithms are also discussed.
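The primal-dual hybrid gradient (PDHG) algorithms mentioned above build on the classical Chambolle-Pock scheme for problems of the form min f(x) + g(Ax). The sketch below applies that classical scheme (not the paper's variants) to a small equality-constrained projection problem; the step sizes `tau` and `sigma` are assumed values satisfying the standard condition tau * sigma * ||A||^2 < 1.

```python
import numpy as np

def pdhg(A, b, c, tau=0.5, sigma=0.5, iters=2000):
    """Primal-dual hybrid gradient (Chambolle-Pock) for
    min 0.5*||x - c||^2 + indicator{Ax = b},
    i.e. projecting c onto the affine set {x : Ax = b}.
    Convergence requires tau * sigma * ||A||^2 < 1."""
    m, n = A.shape
    x = np.zeros(n)
    y = np.zeros(m)
    for _ in range(iters):
        # prox of tau * f, with f(x) = 0.5*||x - c||^2
        x_new = (x - tau * A.T @ y + tau * c) / (1.0 + tau)
        # prox of sigma * g*, where g is the indicator of {b}, so g*(y) = b.y
        y = y + sigma * (A @ (2 * x_new - x) - b)   # extrapolated primal point
        x = x_new
    return x
```

On the same toy data as before (`A = [[1, 1]]`, `b = [1]`, `c = [3, 1]`), the iterates converge to the projection `[1.5, -0.5]`.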