On enhanced KKT optimality conditions for smooth nonlinear optimization

The Fritz John (FJ) and KKT conditions are fundamental tools for characterizing minimizers and form the basis of almost all methods for constrained optimization. Since the seminal works of Fritz John, Karush, Kuhn and Tucker, the FJ/KKT conditions have been enhanced by adding extra necessary conditions. Such an extension was initially proposed by Hestenes in the 1970s … Read more
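For context, the classical KKT conditions for a smooth problem $\min f(x)$ subject to $g_i(x) \le 0$ and $h_j(x) = 0$ state that at a minimizer $x^*$ (under a constraint qualification) there exist multipliers such that
\[
\nabla f(x^*) + \sum_i \lambda_i \nabla g_i(x^*) + \sum_j \mu_j \nabla h_j(x^*) = 0,
\qquad \lambda_i \ge 0, \quad \lambda_i\, g_i(x^*) = 0 \ \text{for all } i.
\]
The enhanced conditions discussed in the paper strengthen this by imposing extra requirements on the multipliers; their precise form is given in the full text.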

An adaptive superfast inexact proximal augmented Lagrangian method for smooth nonconvex composite optimization problems

This work presents an adaptive superfast proximal augmented Lagrangian (AS-PAL) method for solving linearly-constrained smooth nonconvex composite optimization problems. Each iteration of AS-PAL inexactly solves a possibly nonconvex proximal augmented Lagrangian (AL) subproblem, obtained by an aggressive/adaptive choice of prox stepsize aimed at substantially improving computational performance, followed by a full Lagrangian … Read more
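As a rough illustration of the proximal AL structure described above (not the paper's AS-PAL method; the function names, the inner gradient solver, and the crude stepsize-halving rule below are all assumptions made for the sketch), one outer loop could look like:

```python
import numpy as np

def prox_al_sketch(f_grad, A, b, x0, rho=10.0, max_outer=50, inner_tol=1e-4):
    """Sketch of an inexact proximal augmented Lagrangian loop for
    min f(x) s.t. Ax = b. Illustrative only; NOT the paper's AS-PAL."""
    x, p = x0.copy(), np.zeros(A.shape[0])  # primal iterate, multiplier
    prox = 1.0                              # prox stepsize, adapted below
    for _ in range(max_outer):
        x_prev = x.copy()
        # Inexactly minimize the proximal AL subproblem by gradient steps:
        #   f(x) + <p, Ax-b> + (rho/2)||Ax-b||^2 + (1/(2*prox))||x - x_prev||^2
        for _ in range(1000):
            g = f_grad(x) + A.T @ (p + rho * (A @ x - b)) + (x - x_prev) / prox
            if np.linalg.norm(g) <= inner_tol:
                break
            x -= 1e-2 * g
        else:
            prox *= 0.5  # crude adaptation: shrink the prox stepsize if the inner solve stalls
        p = p + rho * (A @ x - b)  # full Lagrange multiplier update
    return x, p
```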

Faster Lagrangian-based methods: a unified prediction-correction framework

Motivated by the prediction-correction framework constructed by He and Yuan [SIAM J. Numer. Anal. 50: 700-709, 2012], we propose a unified prediction-correction framework to accelerate Lagrangian-based methods. More precisely, for strongly convex optimization, the general linearized Lagrangian method with an indefinite proximal term, the alternating direction method of multipliers (ADMM) with the step size of the Lagrange multiplier not … Read more
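For reference, the classical ADMM covered by such frameworks applies to $\min_{x,y} f(x) + g(y)$ subject to $Ax + By = c$ and, with augmented Lagrangian $\mathcal{L}_\beta(x,y,\lambda) = f(x) + g(y) + \langle \lambda, Ax + By - c\rangle + \frac{\beta}{2}\|Ax + By - c\|^2$, iterates (this is the textbook form; the paper's prediction-correction variants modify these steps):
\[
\begin{aligned}
x^{k+1} &= \arg\min_x\ \mathcal{L}_\beta(x, y^k, \lambda^k),\\
y^{k+1} &= \arg\min_y\ \mathcal{L}_\beta(x^{k+1}, y, \lambda^k),\\
\lambda^{k+1} &= \lambda^k + \tau\beta\,(A x^{k+1} + B y^{k+1} - c),
\end{aligned}
\]
where $\tau$ is the step size of the Lagrange multiplier referred to in the abstract.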

Partitioning through projections: strong SDP bounds for large graph partition problems

The graph partition problem (GPP) aims at clustering the vertex set of a graph into a fixed number of disjoint subsets of given sizes such that the sum of weights of edges joining different sets is minimized. This paper investigates the quality of doubly nonnegative (DNN) relaxations, i.e., relaxations having matrix variables that are both … Read more
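In one standard binary formulation (an illustration; the paper's exact model may differ), with weighted adjacency matrix $W$ and prescribed set sizes $m = (m_1,\dots,m_k)$, the GPP reads
\[
\min_{X}\ \tfrac{1}{2}\,\big\langle W,\ \mathbf{1}\mathbf{1}^{\top} - XX^{\top}\big\rangle
\quad\text{s.t.}\quad X\mathbf{1}_k = \mathbf{1}_n,\ \ X^{\top}\mathbf{1}_n = m,\ \ X \in \{0,1\}^{n \times k},
\]
where column $j$ of $X$ indicates the vertices assigned to set $j$, so $(XX^{\top})_{uv} = 1$ exactly when $u$ and $v$ lie in the same set. Replacing $XX^{\top}$ by a matrix variable that is both positive semidefinite and entrywise nonnegative yields a DNN relaxation of the kind studied here.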

A novel sequential optimality condition for smooth constrained optimization and algorithmic consequences

In the smooth constrained optimization setting, this work introduces the Domain Complementary Approximate Karush-Kuhn-Tucker (DCAKKT) condition, inspired by a sequential optimality condition recently devised for nonsmooth constrained optimization problems. It is shown that the augmented Lagrangian method can generate limit points satisfying DCAKKT, and it is proved that such a condition is not related to … Read more
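For orientation, the classical approximate-KKT (AKKT) condition, which sequential conditions of this type refine, asks for sequences $x^k \to x^*$ and $\lambda^k \ge 0$ such that
\[
\nabla f(x^k) + \sum_i \lambda_i^k\, \nabla g_i(x^k) \to 0,
\qquad \min\{-g_i(x^k),\ \lambda_i^k\} \to 0 \ \text{for all } i.
\]
The precise DCAKKT definition, which additionally accounts for the domain, is given in the paper.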

Global Convergence of Augmented Lagrangian Method Applied to Mathematical Program with Switching Constraints

The mathematical program with switching constraints (MPSC) is a class of problems with disjunctive constraints. Existing convergence results cannot be applied directly to this class, since the constraint qualifications required to ensure convergence are very likely to fail. In this paper, we apply the augmented Lagrangian method (ALM) to solve the … Read more
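For concreteness, an MPSC has the standard form
\[
\min_x\ f(x) \quad\text{s.t.}\quad g(x) \le 0,\ \ h(x) = 0,\ \ G_i(x)\,H_i(x) = 0,\ \ i = 1,\dots,m,
\]
where each product constraint forces at least one of $G_i(x)$ and $H_i(x)$ to vanish ("switch off"); it is this disjunctive structure that causes standard constraint qualifications to fail.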

Global Complexity Bound of a Proximal ADMM for Linearly-Constrained Nonseparable Nonconvex Composite Programming

This paper proposes and analyzes a dampened proximal alternating direction method of multipliers (DP.ADMM) for solving linearly-constrained nonconvex optimization problems in which the smooth part of the objective function is nonseparable. Each iteration of DP.ADMM consists of: (i) a sequence of partial proximal augmented Lagrangian (AL) updates, (ii) an under-relaxed Lagrange multiplier update, and (iii) a … Read more
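A minimal sketch of steps (i) and (ii) for a two-block problem $\min f(x) + g(y)$ s.t. $Ax + By = c$ (illustrative only, not the paper's DP.ADMM; grad_f, g_prox, and the step sizes are assumed inputs):

```python
import numpy as np

def dp_admm_sketch(grad_f, g_prox, A, B, c, x0, y0,
                   rho=1.0, theta=0.5, step=1e-2, iters=200):
    """Two-block proximal ADMM sketch with an under-relaxed (dampened)
    multiplier update (theta < 1). Illustrative only; NOT the paper's DP.ADMM."""
    x, y, p = x0.copy(), y0.copy(), np.zeros(c.shape)
    for _ in range(iters):
        # (i) partial proximal AL updates: a few gradient steps in x, ...
        for _ in range(20):
            x -= step * (grad_f(x) + A.T @ (p + rho * (A @ x + B @ y - c)))
        # ... then one proximal-gradient step in y using the prox of g
        v = y - step * (B.T @ (p + rho * (A @ x + B @ y - c)))
        y = g_prox(v, step)  # g_prox(v, t) = argmin_u g(u) + ||u - v||^2 / (2t)
        # (ii) under-relaxed Lagrange multiplier update
        p = p + theta * rho * (A @ x + B @ y - c)
    return x, y, p
```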

Dual descent ALM and ADMM

Classical primal-dual algorithms attempt to solve $\max_{\mu}\min_{x} \mathcal{L}(x,\mu)$ by alternately minimizing over the primal variable $x$ through primal descent and maximizing over the dual variable $\mu$ through dual ascent. However, when $\mathcal{L}(x,\mu)$ is highly nonconvex with complex constraints in $x$, the minimization over $x$ may not achieve global optimality, and hence the dual ascent step loses … Read more
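For contrast, the classical augmented Lagrangian method for $\min_x f(x)$ s.t. $Ax = b$ pairs an (approximate) primal minimization with a dual ascent step:
\[
x^{k+1} \approx \arg\min_x\ \mathcal{L}_\rho(x, \mu^k),
\qquad
\mu^{k+1} = \mu^k + \rho\,(A x^{k+1} - b),
\]
where $\mathcal{L}_\rho$ is the augmented Lagrangian with penalty $\rho$. It is this second step that, per the abstract, becomes unreliable when the minimization over $x$ only reaches a stationary point rather than a global minimizer.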

A New Insight on Augmented Lagrangian Method with Applications in Machine Learning

By exploiting double-penalty terms for the primal subproblem, we develop a novel relaxed augmented Lagrangian method for solving a family of convex optimization problems subject to equality or inequality constraints. This new method is then extended to solve a general multi-block separable convex optimization problem, and two related primal-dual hybrid gradient algorithms are also discussed. … Read more
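One natural reading of a double-penalty primal subproblem (an assumption made here for illustration; the paper's precise construction may differ) combines a constraint penalty with a proximal penalty:
\[
x^{k+1} = \arg\min_x\ f(x) + \langle \lambda^k, Ax - b\rangle
+ \frac{\rho_1}{2}\,\|Ax - b\|^2 + \frac{\rho_2}{2}\,\|x - x^k\|^2 .
\]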

Algorithms for Difference-of-Convex (DC) Programs Based on Difference-of-Moreau-Envelopes Smoothing

In this paper we consider minimization of a difference-of-convex (DC) function with and without linear constraints. We first study a smooth approximation of a generic DC function, termed difference-of-Moreau-envelopes (DME) smoothing, where both components of the DC function are replaced by their respective Moreau envelopes. The resulting smooth approximation is shown to be Lipschitz differentiable, … Read more
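Concretely, the Moreau envelope of a convex function $\varphi$ with parameter $\mu > 0$ is
\[
M_{\mu}\varphi(x) = \min_u\ \varphi(u) + \frac{1}{2\mu}\|u - x\|^2,
\qquad
\nabla M_{\mu}\varphi(x) = \frac{x - \operatorname{prox}_{\mu\varphi}(x)}{\mu},
\]
so for $f = g - h$ the DME smoothing replaces $f$ by $M_{\mu} g - M_{\mu} h$ (the paper may use distinct parameters for the two components). Since each envelope has a $1/\mu$-Lipschitz gradient, the difference is Lipschitz differentiable, as stated in the abstract.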