A New Insight on Augmented Lagrangian Method with Applications in Machine Learning

By exploiting double-penalty terms in the primal subproblem, we develop a novel relaxed augmented Lagrangian method for solving a family of convex optimization problems subject to equality or inequality constraints. This method is then extended to a general multi-block separable convex optimization problem, and two related primal-dual hybrid gradient algorithms are also discussed. Sublinear and linear convergence rates are established via variational characterizations of both the saddle point of the problem and the first-order optimality conditions of the involved subproblems. Extensive experiments on the linear support vector machine problem and the robust principal component analysis problem arising in machine learning indicate that the proposed algorithms perform much better than several state-of-the-art algorithms.
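For context, the classical augmented Lagrangian method that this relaxed double-penalty variant builds on alternates an exact primal minimization with a dual multiplier update. Below is a minimal sketch for an equality-constrained quadratic program; the problem data `A`, `b`, `c` and the penalty `rho` are illustrative choices, not taken from the paper, and this is the standard baseline iteration rather than the authors' proposed scheme.

```python
import numpy as np

def alm(A, b, c, rho=1.0, iters=50):
    """Classical augmented Lagrangian method for
    min 0.5*||x - c||^2  subject to  A x = b."""
    m, n = A.shape
    lam = np.zeros(m)                 # dual multipliers
    M = np.eye(n) + rho * A.T @ A     # Hessian of the augmented Lagrangian
    x = np.zeros(n)
    for _ in range(iters):
        # primal step: exact minimization of the quadratic subproblem
        x = np.linalg.solve(M, c - A.T @ lam + rho * A.T @ b)
        # dual ascent step with step size rho
        lam = lam + rho * (A @ x - b)
    return x, lam

# illustrative instance: project c onto the simplex-like affine set sum(x) = 1
A = np.ones((1, 3))
b = np.array([1.0])
c = np.array([1.0, 2.0, 3.0])
x, lam = alm(A, b, c)
```

Since the ALM dual update is a proximal-point step on the dual problem, the iteration converges for any penalty `rho > 0`; for this instance the iterates converge to the Euclidean projection of `c` onto the constraint set.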
