A stochastic Lagrangian-based method for nonconvex optimization with nonlinear constraints

The Augmented Lagrangian Method (ALM) is one of the most common approaches for solving linearly and nonlinearly constrained problems. However, for nonconvex objectives, handling nonlinear inequality constraints remains challenging. In this paper, we propose a stochastic ALM with backtracking line search that operates on a subset (mini-batch) of randomly selected points to solve nonconvex problems. The considered class of problems includes both nonlinear equality and inequality constraints. Together with a formal proof of the convergence properties (in expectation) of the proposed algorithm and its computational complexity, the performance of the proposed algorithm is then compared numerically against both exact and inexact state-of-the-art ALM methods. Furthermore, we apply the proposed stochastic ALM method to solve a multi-constrained network design problem. We perform extensive numerical experiments on a set of instances extracted from SNDlib to study the method's behavior and performance, as well as potential improvements. We then analyze and compare the results against those obtained by extending the method developed in [Contardo2021] for the approximation of separable nonconvex optimization programs to nonlinear constraints.
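For readers unfamiliar with the ingredients combined here, the following is a minimal, illustrative sketch (not the paper's algorithm) of a stochastic augmented Lagrangian iteration: a classical augmented Lagrangian with equality constraints h(x) = 0 and inequality constraints g(x) <= 0, an inner loop that takes gradient steps on a mini-batch of the objective's sample terms, an Armijo backtracking line search evaluated on the same mini-batch, and first-order multiplier updates in the outer loop. The toy objective, constraints, and all parameter choices below are assumptions made purely for illustration.

```python
import numpy as np

# Sketch of a stochastic ALM step for
#     min_x  (1/N) * sum_i f_i(x)   s.t.  h(x) = 0,  g(x) <= 0.
# Everything here (data, constraints, step sizes) is a toy illustration.

rng = np.random.default_rng(0)
N, d = 200, 2
A = rng.normal(size=(N, d))                      # per-sample data defining f_i
b = rng.normal(size=N)

def f_batch(x, idx):                             # nonconvex per-sample losses (toy choice)
    r = A[idx] @ x - b[idx]
    return np.mean(np.log(1.0 + r**2))

def grad_f_batch(x, idx):
    r = A[idx] @ x - b[idx]
    return A[idx].T @ (2.0 * r / (1.0 + r**2)) / len(idx)

def h(x):  return np.array([x.sum() - 1.0])      # equality constraint h(x) = 0
def Jh(x): return np.ones((1, d))
def g(x):  return np.array([x[0]**2 - 0.5])      # inequality constraint g(x) <= 0
def Jg(x): return np.array([[2.0 * x[0], 0.0]])

def aug_lag(x, lam, mu, rho, idx):
    # classical augmented Lagrangian; inequalities handled via max(0, .) terms
    pen_eq = lam @ h(x) + 0.5 * rho * np.sum(h(x)**2)
    pen_in = np.sum(np.maximum(0.0, mu + rho * g(x))**2 - mu**2) / (2.0 * rho)
    return f_batch(x, idx) + pen_eq + pen_in

def grad_aug_lag(x, lam, mu, rho, idx):
    geq = Jh(x).T @ (lam + rho * h(x))
    gin = Jg(x).T @ np.maximum(0.0, mu + rho * g(x))
    return grad_f_batch(x, idx) + geq + gin

x, lam, mu, rho = np.zeros(d), np.zeros(1), np.zeros(1), 10.0
for outer in range(30):
    for inner in range(50):                      # stochastic inner minimization
        idx = rng.choice(N, size=32, replace=False)   # mini-batch of samples
        gk = grad_aug_lag(x, lam, mu, rho, idx)
        t, beta, c = 1.0, 0.5, 1e-4
        Lk = aug_lag(x, lam, mu, rho, idx)
        # Armijo backtracking on the mini-batch augmented Lagrangian
        while t > 1e-8 and aug_lag(x - t * gk, lam, mu, rho, idx) > Lk - c * t * (gk @ gk):
            t *= beta
        x = x - t * gk
    lam = lam + rho * h(x)                       # first-order multiplier updates
    mu = np.maximum(0.0, mu + rho * g(x))

print("x =", x, "h(x) =", h(x), "g(x) =", g(x))
```

In this sketch the line search is performed on the same mini-batch used for the gradient, which keeps each inner step cheap; how the batch size, penalty parameter, and multiplier updates should actually be chosen to obtain convergence in expectation is precisely what the paper's analysis addresses.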
