A novel sequential optimality condition for smooth constrained optimization and algorithmic consequences

In the smooth constrained optimization setting, this work introduces the Domain Complementary Approximate Karush-Kuhn-Tucker (DCAKKT) condition, inspired by a sequential optimality condition recently devised for nonsmooth constrained optimization problems. It is shown that the augmented Lagrangian method can generate limit points satisfying DCAKKT, and it is proved that such a condition is not related to … Read more
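
For context, DCAKKT belongs to the family of sequential (asymptotic) optimality conditions exemplified by the classical AKKT condition; the display below records AKKT for a smooth problem $\min f(x)$ s.t. $h(x)=0$, $g(x)\le 0$, as a point of reference only (the precise DCAKKT definition is given in the paper):

$$\exists\, x^k \to x^*,\ \lambda^k \in \mathbb{R}^m,\ \mu^k \in \mathbb{R}^p_{+}:\quad \nabla f(x^k) + \nabla h(x^k)\lambda^k + \nabla g(x^k)\mu^k \to 0, \quad \min\{\mu_i^k,\, -g_i(x^k)\} \to 0.$$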

Global Convergence of Augmented Lagrangian Method Applied to Mathematical Program with Switching Constraints

The mathematical program with switching constraints (MPSC) is a class of problems with disjunctive constraints. Existing convergence results cannot be applied directly to this class of problems, since the constraint qualifications required to ensure convergence are very likely to fail. In this paper, we apply the augmented Lagrangian method (ALM) to solve the … Read more
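
For reference, an MPSC takes the generic form below, where each switching constraint $G_k(x)\,H_k(x) = 0$ forces at least one of the two functions to vanish; this is a standard statement of the problem class, and the paper's exact setting may carry additional structure:

$$\min_{x}\ f(x) \quad \text{s.t.} \quad g(x) \le 0, \quad h(x) = 0, \quad G_k(x)\,H_k(x) = 0, \ \ k = 1,\dots,m.$$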

Global Complexity Bound of a Proximal ADMM for Linearly-Constrained Nonseparable Nonconvex Composite Programming

This paper proposes and analyzes a dampened proximal alternating direction method of multipliers (DP.ADMM) for solving linearly-constrained nonconvex optimization problems where the smooth part of the objective function is nonseparable. Each iteration of DP.ADMM consists of: (i) a sequence of partial proximal augmented Lagrangian (AL) updates, (ii) an under-relaxed Lagrange multiplier update, and (iii) a … Read more
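
As a hedged illustration of the under-relaxed multiplier step in (ii), the sketch below damps the classical ALM multiplier update for a constraint $Ax = b$ by a factor theta in (0, 1); the function name, parameters, and update rule are generic ADMM practice, not the exact DP.ADMM rule from the paper:

```python
import numpy as np

def underrelaxed_multiplier_update(p, rho, theta, A, x, b):
    """One under-relaxed multiplier step for the constraint A x = b.

    theta in (0, 1) damps the classical update p + rho * (A @ x - b);
    theta = 1 recovers the standard augmented Lagrangian ascent step.
    """
    residual = A @ x - b  # primal feasibility residual
    return p + theta * rho * residual

# Example: one damped update on random data.
rng = np.random.default_rng(0)
A, x, b = rng.standard_normal((3, 5)), rng.standard_normal(5), np.zeros(3)
p_new = underrelaxed_multiplier_update(np.zeros(3), rho=1.0, theta=0.5, A=A, x=x, b=b)
```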

Dual descent ALM and ADMM

Classical primal-dual algorithms attempt to solve $\max_{\mu}\min_{x} \mathcal{L}(x,\mu)$ by alternately minimizing over the primal variable $x$ through primal descent and maximizing over the dual variable $\mu$ through dual ascent. However, when $\mathcal{L}(x,\mu)$ is highly nonconvex with complex constraints in $x$, the minimization over $x$ may not achieve global optimality, and hence the dual ascent step … Read more
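
A minimal sketch of the classical alternation described above, for $\mathcal{L}(x,\mu) = f(x) + \mu^{\top} h(x)$ with equality constraints $h(x) = 0$; all names and step sizes are illustrative, and this is the baseline scheme the paper departs from, not its dual descent method:

```python
import numpy as np

def primal_descent_dual_ascent(grad_f, h, jac_h, x0, mu0,
                               alpha=1e-2, beta=1e-2, iters=1000):
    """Classical first-order primal-dual iteration for
    L(x, mu) = f(x) + mu^T h(x): descend in x, ascend in mu."""
    x, mu = x0.astype(float), mu0.astype(float)
    for _ in range(iters):
        x = x - alpha * (grad_f(x) + jac_h(x).T @ mu)  # primal descent step
        mu = mu + beta * h(x)                          # dual ascent step
    return x, mu
```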

A New Insight on Augmented Lagrangian Method with Applications in Machine Learning

By exploiting double-penalty terms for the primal subproblem, we develop a novel relaxed augmented Lagrangian method for solving a family of convex optimization problems subject to equality or inequality constraints. This new method is then extended to solve a general multi-block separable convex optimization problem, and two related primal-dual hybrid gradient algorithms are also discussed. … Read more
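
For reference, the classical augmented Lagrangian for a linearly-constrained convex problem $\min f(x)$ s.t. $Ax = b$ is shown below; the double-penalty relaxation described above adds a second penalty term to the primal subproblem, whose precise form is specified in the paper:

$$\mathcal{L}_{\rho}(x,\lambda) = f(x) + \langle \lambda,\, Ax - b \rangle + \frac{\rho}{2}\,\|Ax - b\|^2.$$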

Algorithms for Difference-of-Convex (DC) Programs Based on Difference-of-Moreau-Envelopes Smoothing

In this paper we consider minimization of a difference-of-convex (DC) function with and without linear constraints. We first study a smooth approximation of a generic DC function, termed difference-of-Moreau-envelopes (DME) smoothing, where both components of the DC function are replaced by their respective Moreau envelopes. The resulting smooth approximation is shown to be Lipschitz differentiable, … Read more
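
For reference, the Moreau envelope of a convex function $f$ with parameter $\mu > 0$, the resulting DME approximation of $F = f - g$, and the envelope's gradient read as follows (whether both components share the same smoothing parameter is a detail fixed in the paper):

$$\operatorname{env}_{\mu} f(x) = \min_{z}\Big\{ f(z) + \tfrac{1}{2\mu}\|z - x\|^2 \Big\}, \qquad F_{\mu}(x) = \operatorname{env}_{\mu} f(x) - \operatorname{env}_{\mu} g(x), \qquad \nabla \operatorname{env}_{\mu} f(x) = \tfrac{1}{\mu}\big(x - \operatorname{prox}_{\mu f}(x)\big).$$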

Decomposition Methods for Global Solutions of Mixed-Integer Linear Programs

This paper introduces two decomposition-based methods for two-block mixed-integer linear programs (MILPs), which aim to take advantage of separable structures of the original problem by solving a sequence of lower-dimensional MILPs. The first method is based on the $\ell_1$-augmented Lagrangian method (ALM), and the second one is based on a modified alternating direction method of … Read more
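
For reference, for a two-block MILP $\min\, c^{\top}x + d^{\top}y$ s.t. $Ax + By = b$, $x \in X$, $y \in Y$, the $\ell_1$-augmented Lagrangian replaces the usual squared penalty with a nonsmooth $\ell_1$ penalty, which can be exact for a finite penalty parameter; this is a hedged generic form, and the paper's exact formulation may differ:

$$\mathcal{L}_{\rho}(x, y, \lambda) = c^{\top}x + d^{\top}y + \lambda^{\top}(Ax + By - b) + \rho\,\|Ax + By - b\|_1.$$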

Faster Lagrangian-Based Methods in Convex Optimization

In this paper, we aim at unifying, simplifying, and improving the convergence rate analysis of Lagrangian-based methods for convex optimization problems. We first introduce the notion of nice primal algorithmic map, which plays a central role in the unification and in the simplification of the analysis of all Lagrangian-based methods. Equipped with a nice primal … Read more

A Two-level ADMM Algorithm for AC OPF with Convergence Guarantees

This paper proposes a two-level distributed algorithmic framework for solving the AC optimal power flow (OPF) problem with convergence guarantees. The presence of highly nonconvex constraints in OPF poses significant challenges to distributed algorithms based on the alternating direction method of multipliers (ADMM). In particular, convergence is not provably guaranteed for nonconvex network optimization problems … Read more

On scaled stopping criteria for a safeguarded augmented Lagrangian method with theoretical guarantees

This paper discusses the use of a stopping criterion based on scaling the Karush-Kuhn-Tucker (KKT) conditions by the norm of the approximate Lagrange multiplier in the ALGENCAN implementation of a safeguarded augmented Lagrangian method. Such a stopping criterion is already used in several nonlinear programming solvers, but it has not yet been considered in … Read more
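
A hedged sketch of what such a scaled test can look like: the stationarity residual is measured relative to the size of the approximate multiplier, so that large multipliers do not force unattainably tight absolute tolerances. The function and the exact scaling below are illustrative, not the criterion implemented in ALGENCAN:

```python
import numpy as np

def scaled_kkt_stop(grad_lagrangian, constraint_violation, multiplier, tol):
    """Illustrative scaled KKT stopping test (hypothetical form)."""
    scale = max(1.0, np.linalg.norm(multiplier, np.inf))
    stationarity_ok = np.linalg.norm(grad_lagrangian, np.inf) <= tol * scale
    feasibility_ok = np.linalg.norm(constraint_violation, np.inf) <= tol
    return stationarity_ok and feasibility_ok
```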