A relaxed quasinormality condition and the boundedness of dual augmented Lagrangian sequences

Global convergence of augmented Lagrangian methods to a first-order stationary point is well known to hold under rather weak constraint qualifications. In particular, several constant rank-type conditions have been introduced for this purpose, which have turned out to be relevant beyond this scope as well. In this paper we show that, in fact, under these conditions the sequence …

Safeguarded augmented Lagrangian algorithms with scaled stopping criterion for the subproblems

At each iteration of the Safeguarded Augmented Lagrangian algorithm Algencan, a bound-constrained subproblem consisting of the minimization of the Powell-Hestenes-Rockafellar augmented Lagrangian function is considered, for which a minimizer with tolerance tending to zero is sought. More precisely, a point that satisfies a subproblem first-order necessary optimality condition with tolerance tending to zero is required. …
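
For readers unfamiliar with the Powell-Hestenes-Rockafellar (PHR) function minimized in each subproblem, here is a minimal sketch for inequality-constrained problems (the function and example names are ours for illustration, not Algencan's API):

```python
import numpy as np

def phr_augmented_lagrangian(f, g, x, lam, rho):
    """Powell-Hestenes-Rockafellar augmented Lagrangian for
    min f(x) s.t. g(x) <= 0 (componentwise), with multiplier
    estimate lam >= 0 and penalty parameter rho > 0."""
    gx = np.asarray(g(x), dtype=float)
    shifted = np.maximum(0.0, lam + rho * gx)   # PHR shift-and-clip term
    return f(x) + (np.sum(shifted**2) - np.sum(lam**2)) / (2.0 * rho)

# Toy problem: minimize x^2 subject to 1 - x <= 0 (i.e., x >= 1).
f = lambda x: float(x[0] ** 2)
g = lambda x: np.array([1.0 - x[0]])
val = phr_augmented_lagrangian(f, g, np.array([2.0]), np.array([0.0]), 10.0)
# At a strictly feasible point with zero multiplier the penalty term
# vanishes, so the value equals f(x).
```

A bound-constrained solver is driven on this function while the tolerance on the subproblem's first-order condition tends to zero.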

Solving low-rank semidefinite programs via manifold optimization

We propose a manifold optimization approach to solve linear semidefinite programs (SDP) with low-rank solutions. This approach incorporates the augmented Lagrangian method and the Burer-Monteiro factorization, and features adaptive strategies for updating the factorization size and the penalty parameter. We prove that the present algorithm can solve SDPs to global optimality, despite the …
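
The combination of the augmented Lagrangian with the Burer-Monteiro factorization X = YYᵀ can be sketched as follows (a hedged illustration under our own naming, not the paper's code):

```python
import numpy as np

def bm_aug_lagrangian(C, A_list, b, Y, lam, rho):
    """Augmented Lagrangian of  min <C,X> s.t. <A_i,X> = b_i, X PSD,
    evaluated at the low-rank Burer-Monteiro point X = Y @ Y.T."""
    X = Y @ Y.T                      # implicitly PSD, rank <= Y.shape[1]
    resid = np.array([np.tensordot(Ai, X) for Ai in A_list]) - b
    return np.tensordot(C, X) - lam @ resid + 0.5 * rho * (resid @ resid)
```

Adaptive strategies of the kind described above would then grow the number of columns of Y and the penalty parameter rho when the current factorization size proves insufficient.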

A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization

In this paper we consider finding an approximate second-order stationary point (SOSP) of general nonconvex conic optimization, which minimizes a twice differentiable function subject to nonlinear equality constraints and a convex conic constraint. In particular, we propose a Newton-conjugate gradient (Newton-CG) based barrier-augmented Lagrangian method for finding an approximate SOSP of this problem. …

A Newton-CG based augmented Lagrangian method for finding a second-order stationary point of nonconvex equality constrained optimization with complexity guarantees

In this paper we consider finding a second-order stationary point (SOSP) of nonconvex equality constrained optimization when a nearly feasible point is known. In particular, we first propose a new Newton-CG method for finding an approximate SOSP of unconstrained optimization and show that it enjoys a substantially better complexity than the Newton-CG method [56]. …
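
The core of any Newton-CG method is a conjugate-gradient loop on the Newton system that also watches for negative curvature. A minimal sketch follows (our own simplified variant, without the capping and perturbation devices such methods add for complexity guarantees):

```python
import numpy as np

def newton_cg_step(grad, hess_vec, x, tol=1e-8, max_iter=50):
    """Approximately solve H d = -g with conjugate gradients;
    return a negative-curvature direction if one is encountered."""
    g = grad(x)
    d = np.zeros_like(g)
    r = -g.copy()                       # residual of H d = -g
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Hp = hess_vec(x, p)
        curv = p @ Hp
        if curv <= 0:                   # negative curvature detected
            return p, 'negative_curvature'
        alpha = rs / curv
        d += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d, 'newton'
```

On a convex quadratic this reduces to the exact Newton step; on a saddle it returns a descent direction of negative curvature, which is what SOSP-seeking methods exploit.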

A first-order augmented Lagrangian method for constrained minimax optimization

In this paper we study a class of constrained minimax problems. In particular, we propose a first-order augmented Lagrangian method for solving them, whose subproblems turn out to be minimax problems with a much simpler structure that can be suitably solved by a first-order method recently developed by the authors in [26]. Under some suitable assumptions, …

On enhanced KKT optimality conditions for smooth nonlinear optimization

The Fritz John (FJ) and KKT conditions are fundamental tools for characterizing minimizers and form the basis of almost all methods for constrained optimization. Since the seminal works of Fritz John, Karush, Kuhn, and Tucker, FJ/KKT conditions have been enhanced by adding extra necessary conditions. Such an extension was initially proposed by Hestenes in the 1970s …

An adaptive superfast inexact proximal augmented Lagrangian method for smooth nonconvex composite optimization problems

This work presents an adaptive superfast proximal augmented Lagrangian (AS-PAL) method for solving linearly constrained smooth nonconvex composite optimization problems. Each iteration of AS-PAL inexactly solves a possibly nonconvex proximal augmented Lagrangian (AL) subproblem obtained by an aggressive/adaptive choice of prox stepsize, with the aim of substantially improving its computational performance, followed by a full Lagrangian …

Faster Lagrangian-based methods: a unified prediction-correction framework

Motivated by the prediction-correction framework constructed by He and Yuan [SIAM J. Numer. Anal. 50: 700-709, 2012], we propose a unified prediction-correction framework to accelerate Lagrangian-based methods. More precisely, for strongly convex optimization, the general linearized Lagrangian method with an indefinite proximal term, the alternating direction method of multipliers (ADMM) with a step size for the Lagrangian multiplier not …
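
As one concrete member of the Lagrangian-based family discussed here, scaled-form ADMM on a tiny l1-regularized problem can be sketched as follows (a textbook instance for orientation, not the accelerated methods of the paper; names are ours):

```python
import numpy as np

def admm_l1_denoise(a, reg, rho=1.0, iters=200):
    """Scaled-form ADMM for  min_x 0.5*||x - a||^2 + reg*||x||_1,
    split as f(x) = 0.5||x - a||^2, g(z) = reg*||z||_1, with x = z."""
    x = np.zeros_like(a)
    z = np.zeros_like(a)
    u = np.zeros_like(a)                 # scaled dual (multiplier) variable
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)                  # f-prox step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - reg / rho, 0.0)
        u = u + x - z                    # multiplier (correction) update
    return z
```

The x- and z-updates play the role of the prediction step and the multiplier update that of the correction step in the framework above; acceleration then amounts to modifying how these steps are combined.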

Partitioning through projections: strong SDP bounds for large graph partition problems

The graph partition problem (GPP) aims at clustering the vertex set of a graph into a fixed number of disjoint subsets of given sizes such that the sum of weights of edges joining different sets is minimized. This paper investigates the quality of doubly nonnegative (DNN) relaxations, i.e., relaxations having matrix variables that are both …
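
The GPP objective itself is easy to state via the graph Laplacian: for the 0/1 partition-indicator matrix S, the cut weight equals ½ trace(SᵀLS). A small sketch (our own helper for illustration, unrelated to the paper's relaxation code):

```python
import numpy as np

def cut_weight(W, labels, k):
    """Total weight of edges joining different parts, via the graph
    Laplacian L = D - W:  cut = 0.5 * trace(S^T L S), where S is the
    n-by-k 0/1 indicator of the partition given by labels."""
    n = len(labels)
    L = np.diag(W.sum(axis=1)) - W       # unnormalized graph Laplacian
    S = np.zeros((n, k))
    S[np.arange(n), labels] = 1.0        # one-hot partition indicator
    return 0.5 * np.trace(S.T @ L @ S)
```

Relaxing the combinatorial constraints on S (rather than enumerating partitions) is exactly what the SDP/DNN bounds studied in the paper do.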