A Two-level ADMM Algorithm for AC OPF with Convergence Guarantees

This paper proposes a two-level distributed algorithmic framework for solving the AC optimal power flow (OPF) problem with convergence guarantees. The presence of highly nonconvex constraints in OPF poses significant challenges to distributed algorithms based on the alternating direction method of multipliers (ADMM). In particular, convergence is not provably guaranteed for nonconvex network optimization problems … Read more
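
For orientation, the classical single-level ADMM on a splitting min f(x) + g(z) subject to x = z alternates two block minimizations with a dual update. The sketch below illustrates that generic scheme on a toy convex (lasso) instance; it is not the paper's two-level method for AC OPF, and all identifiers and parameter choices (admm_lasso, rho, lam, iteration count) are illustrative assumptions.

```python
# Minimal scaled-form ADMM sketch on a toy lasso instance:
#   minimize 0.5*||A x - b||^2 + lam*||z||_1   subject to   x - z = 0.
# Illustrative only; the paper's two-level ADMM for AC OPF handles
# nonconvex power-flow constraints and is considerably more involved.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    # x-update solves (A^T A + rho I) x = A^T b + rho (z - u); factor data once.
    M = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))   # x-minimization
        z = soft_threshold(x + u, lam / rho)          # z-minimization
        u = u + x - z                                 # scaled dual update
    return x

rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
print(admm_lasso(A, b))
```

The paper addresses exactly the setting where this plain iteration lacks guarantees: nonconvex constraints coupling variables across a network.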

On scaled stopping criteria for a safeguarded augmented Lagrangian method with theoretical guarantees

This paper discusses the use of a stopping criterion based on the scaling of the Karush-Kuhn-Tucker (KKT) conditions by the norm of the approximate Lagrange multiplier in the ALGENCAN implementation of a safeguarded augmented Lagrangian method. Such a stopping criterion is already used in several nonlinear programming solvers, but it has not yet been considered in … Read more
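
As a rough illustration of what such a scaled test can look like for an equality-constrained problem, the snippet below divides the stationarity residual by one plus the multiplier norm, so larger multipliers relax the unscaled tolerance. This is only a plausible reading of the abstract, not ALGENCAN's actual stopping rule, and all identifiers are assumptions.

```python
# Hedged sketch of a scaled KKT test for
#   minimize f(x)   subject to   h(x) = 0.
import numpy as np

def scaled_kkt_satisfied(grad_f, jac_h, h_val, lam, tol=1e-8):
    stationarity = grad_f + jac_h.T @ lam           # gradient of the Lagrangian
    scale = 1.0 + np.linalg.norm(lam, np.inf)       # scaling by the multiplier norm
    return (np.linalg.norm(stationarity, np.inf) <= tol * scale
            and np.linalg.norm(h_val, np.inf) <= tol)
```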

On the best achievable quality of limit points of augmented Lagrangian schemes

The optimization literature contains a vast number of papers dealing with improvements to the global convergence of augmented Lagrangian schemes. Usually, the results are based on weak constraint qualifications or, more recently, on sequential optimality conditions obtained via penalization techniques. In this paper we propose a somewhat different approach, in the sense that the algorithm itself is … Read more

Exact Penalty Function for L21 Norm Minimization over the Stiefel Manifold

L21 norm minimization with orthogonality constraints, whose feasible region is the Stiefel manifold, has wide applications in statistics and data science. The state-of-the-art approaches adopt proximal gradient techniques on either the Stiefel manifold or its tangent spaces. The resulting subproblem does not have a closed-form solution and hence requires an iterative procedure to solve, which is … Read more
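
For contrast with the constrained subproblems mentioned above, the proximal operator of the L21 norm alone does have a closed form: each row of the matrix is shrunk toward zero. A minimal sketch, with illustrative identifiers:

```python
# Row-wise shrinkage: the proximal operator of t * ||X||_{2,1}, where
# ||X||_{2,1} = sum_i ||X[i, :]||_2.
import numpy as np

def prox_l21(X, t):
    norms = np.linalg.norm(X, axis=1, keepdims=True)            # row norms
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-16), 0.0)  # shrink factor per row
    return scale * X
```

Once the orthogonality constraint enters the subproblem, this row-wise shrinkage no longer applies, which is the difficulty the abstract points to.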

On the use of Jordan Algebras for improving global convergence of an Augmented Lagrangian method in nonlinear semidefinite programming

Jordan Algebras are an important tool for dealing with semidefinite programming and optimization over symmetric cones in general. In this paper, a judicious use of Jordan Algebras in the context of sequential optimality conditions is made in order to generalize the global convergence theory of an Augmented Lagrangian method for nonlinear semidefinite programming. An approximate … Read more
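
As standard background (not a result of the paper), the symmetric matrices form a Jordan algebra under the product below, and its cone of squares is the positive semidefinite cone; this is the algebraic structure the abstract refers to.

\[
X \circ Y = \tfrac{1}{2}\,(XY + YX), \qquad
\mathcal{S}^n_{+} = \{\, X \circ X : X \in \mathcal{S}^n \,\}.
\]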

On the convergence of augmented Lagrangian strategies for nonlinear programming

Augmented Lagrangian algorithms are very popular and successful methods for solving constrained optimization problems. Recently, the global convergence analysis of these methods has been dramatically improved by using the notion of sequential optimality conditions. Such conditions are optimality conditions that hold independently of the fulfilment of any constraint qualification, and they provide theoretical tools to justify stopping … Read more
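
For readers who want the basic mechanics, the sketch below shows a generic augmented Lagrangian outer loop for an equality-constrained problem, with the classical multiplier update and a simple safeguard. It is only a minimal illustration under our own naming and parameter choices, not the specific strategies analyzed in the paper.

```python
# Generic augmented Lagrangian outer loop for
#   minimize f(x)   subject to   h(x) = 0,
# with the classical update lam <- lam + rho*h(x), clipped to a bounded box
# as a safeguard.
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, iters=20, rho=10.0, lam_max=1e6):
    x, lam = np.asarray(x0, dtype=float), 0.0
    for _ in range(iters):
        def L(x):  # augmented Lagrangian for the current (lam, rho)
            hx = h(x)
            return f(x) + lam * hx + 0.5 * rho * hx**2
        x = minimize(L, x).x                                  # approximate subproblem solve
        lam = np.clip(lam + rho * h(x), -lam_max, lam_max)    # safeguarded multiplier update
        rho *= 2.0  # crude unconditional increase; practical codes increase rho
                    # only when infeasibility does not decrease enough
    return x, lam

# Toy example: minimize x1^2 + x2^2 subject to x1 + x2 - 1 = 0.
f = lambda x: x[0]**2 + x[1]**2
h = lambda x: x[0] + x[1] - 1.0
print(augmented_lagrangian(f, h, [0.0, 0.0]))
```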

A Class of Smooth Exact Penalty Function Methods for Optimization Problems with Orthogonality Constraints

Updating the augmented Lagrangian multiplier by a closed-form expression yields an efficient first-order infeasible approach for optimization problems with orthogonality constraints. Hence, parallelization becomes tractable in solving this type of problem. Inspired by this closed-form updating scheme, we propose an exact penalty function model with compact convex constraints (PenC). We show that PenC can act as an … Read more
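
For orientation, on the orthogonality-constrained problem min f(X) subject to X^T X = I_p, first-order stationarity can be written with a multiplier that is available in closed form, which is the kind of closed-form update the abstract alludes to; the paper's exact model and formula may differ.

\[
\nabla f(X) = X\,\Lambda(X), \qquad
\Lambda(X) = \tfrac{1}{2}\bigl(X^\top \nabla f(X) + \nabla f(X)^\top X\bigr).
\]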

A parallel splitting ALM-based algorithm for separable convex programming

The augmented Lagrangian method (ALM) provides a benchmark for tackling the canonical convex minimization problem with linear constraints. We consider a special case where the objective function is the sum of $m$ individual subfunctions without coupled variables. A recent study reveals that the direct extension of ALM to separable convex programming problems is not necessarily … Read more
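
In symbols, the separable structure described above reads as follows, where the notation $\theta_i$, $A_i$, and $\mathcal{X}_i$ is ours for illustration:

\[
\min_{x_1,\dots,x_m} \; \sum_{i=1}^{m} \theta_i(x_i)
\quad \text{s.t.} \quad \sum_{i=1}^{m} A_i x_i = b, \qquad x_i \in \mathcal{X}_i, \; i = 1,\dots,m.
\]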

Optimality conditions for nonlinear second-order cone programming and symmetric cone programming

Nonlinear symmetric cone programming (NSCP) generalizes important optimization problems such as nonlinear programming, nonlinear semidefinite programming and nonlinear second-order cone programming (NSOCP). In this work, we present two new optimality conditions for NSCP without constraint qualifications, which imply the Karush-Kuhn-Tucker conditions under a condition weaker than Robinson’s constraint qualification. In addition, we show the relationship … Read more
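
For reference, with the feasible set written as $g(x) \in \mathcal{K}$ for a self-dual symmetric cone $\mathcal{K}$, the Karush-Kuhn-Tucker conditions mentioned above take the standard form below (sign conventions vary across references):

\[
\nabla f(x) - Dg(x)^{*}\lambda = 0, \qquad
g(x) \in \mathcal{K}, \qquad \lambda \in \mathcal{K}, \qquad
\langle \lambda, g(x) \rangle = 0.
\]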

A sparse semismooth Newton based augmented Lagrangian method for large-scale support vector machines

Support vector machines (SVMs) are successful modeling and prediction tools with a variety of applications. Previous work has demonstrated the superiority of SVMs in dealing with high-dimensional, low-sample-size problems. However, the numerical difficulties of SVMs become severe as the sample size increases. Although there exist many … Read more
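
For reference, a standard soft-margin SVM training problem, the kind of model such augmented Lagrangian solvers target (the paper's exact formulation may differ), is

\[
\min_{w,\,b,\,\xi}\; \tfrac{1}{2}\|w\|^{2} + C\sum_{i=1}^{N}\xi_i
\quad \text{s.t.} \quad y_i\,(w^\top x_i + b) \ge 1 - \xi_i, \;\; \xi_i \ge 0, \;\; i = 1,\dots,N.
\]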