A 4-steps elementary proof of existence of Lagrange multipliers
We present a simplified proof of Lagrange’s theorem using only elementary properties of sets and sequences.
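For readers who want the statement in concrete form, here is a textbook instance of the theorem (standard background, not taken from the paper itself): maximize \(f(x,y)=x+y\) subject to \(g(x,y)=x^2+y^2-1=0\). At a constrained maximizer the theorem guarantees a multiplier \(\lambda\) with
\[ \nabla f(x,y) = \lambda\, \nabla g(x,y), \]
i.e. \(1 = 2\lambda x\) and \(1 = 2\lambda y\), so \(x=y=1/\sqrt{2}\) and \(\lambda=\sqrt{2}/2\), giving the maximum value \(\sqrt{2}\).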
Consider composite nonconvex optimization problems where the objective function consists of a smooth nonconvex term (with Lipschitz-continuous gradient) and a convex (possibly nonsmooth) term. Existing parameter-free methods for such problems often rely on complex multi-loop structures, require line searches, or depend on restrictive assumptions (e.g., bounded iterates). To address these limitations, we introduce a novel … Read more
We present a GPU implementation of Algorithm NCL, an augmented Lagrangian method for solving large-scale and degenerate nonlinear programs. Although interior-point methods and sequential quadratic programming are widely used for solving nonlinear programs, the augmented Lagrangian method is known to offer superior robustness against constraint degeneracies and can rapidly detect infeasibility. We introduce several enhancements … Read more
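For context (classical background, not specific to Algorithm NCL): for an equality-constrained problem \(\min_x f(x)\) subject to \(c(x)=0\), the augmented Lagrangian with penalty parameter \(\rho>0\) is
\[ L_\rho(x,\lambda) = f(x) + \lambda^{\top} c(x) + \tfrac{\rho}{2}\,\|c(x)\|^2, \]
and the basic method alternates approximate minimization of \(L_\rho\) in \(x\) with the multiplier update \(\lambda \leftarrow \lambda + \rho\, c(x)\).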
An algorithm is proposed, analyzed, and tested for solving continuous nonlinear-equality-constrained optimization problems where the constraints are defined by an expectation or an average over a large (finite) number of terms. The main idea of the algorithm is to solve a sequence of equality-constrained problems, each involving a finite sample of constraint-function terms, over which … Read more
In this paper, we address composite optimization problems on Hadamard manifolds, where the objective function is given by the sum of a smooth term (not necessarily convex) and a convex term (not necessarily differentiable). To solve this problem, we develop a proximal gradient method defined directly on the manifold, employing a strategy that enforces monotonicity … Read more
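To fix ideas, here is a minimal sketch of the Euclidean special case of a proximal gradient iteration \(x_{k+1} = \mathrm{prox}_{t g}(x_k - t\,\nabla f(x_k))\); the manifold version described above replaces the straight-line step with a step on the manifold. All names and the toy lasso problem below are illustrative, not from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: componentwise shrinkage.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_f, prox_g, x0, step, iters=500):
    # Iterate x_{k+1} = prox_{step * g}(x_k - step * grad_f(x_k)).
    x = x0
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Toy lasso instance: min 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(20)
lam = 0.1

grad_f = lambda x: A.T @ (A @ x - b)
step = 1.0 / np.linalg.norm(A.T @ A, 2)  # 1/L, L = Lipschitz const of grad_f
x = proximal_gradient(grad_f,
                      lambda v, t: soft_threshold(v, lam * t),
                      np.zeros(5), step)
```

The fixed step \(1/L\) guarantees the monotone decrease of the objective in the smooth convex setting; the paper's contribution is a strategy that enforces such monotonicity directly on a Hadamard manifold.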
Identifying active constraints from a point near an optimal solution is important both theoretically and practically in constrained continuous optimization, as it can help identify optimal Lagrange multipliers and essentially reduces an inequality-constrained problem to an equality-constrained one. Traditional active-set identification guarantees have been proved under assumptions of smoothness and constraint qualifications, and assume exact … Read more
We study optimization over Riemannian embedded submanifolds, where the objective function is relatively smooth in the ambient Euclidean space. Such problems have broad applications but are still largely unexplored. We introduce two Riemannian first-order methods, namely the retraction-based and projection-based Riemannian Bregman gradient methods, by incorporating the Bregman distance into the update steps. The retraction-based … Read more
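As background on the terminology (standard definition, not specific to this paper): given a differentiable convex reference function \(h\), the Bregman distance between \(x\) and \(y\) is
\[ D_h(x,y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle, \]
which recovers the squared Euclidean distance \(\tfrac{1}{2}\|x-y\|^2\) when \(h(x)=\tfrac{1}{2}\|x\|^2\); relative smoothness means the smooth part of the objective is majorized by \(D_h\) up to a constant factor.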
This work investigates global convergence properties of a safeguarded augmented Lagrangian method applied to nonlinear programming problems, with an emphasis on the role of constraint qualifications in ensuring boundedness of the Lagrange multiplier estimates, also known as dual sequences. When functions with locally Lipschitz continuous derivatives define the constraint set, we prove that the Error … Read more
In this paper we introduce a General Dynamic String-Averaging (GDSA) iterative scheme and investigate its convergence properties in the inconsistent case, that is, when the input operators do not have a common fixed point. The Dynamic String-Averaging Projection (DSAP) algorithm itself was introduced in a 2013 paper, where its strong convergence and bounded perturbation resilience were … Read more
In this paper, we study a class of stochastic and finite-sum convex optimization problems with deterministic constraints. Existing methods typically aim to find an \(\epsilon\)-expectedly feasible stochastic optimal solution, in which the expected constraint violation and expected optimality gap are both within a prescribed tolerance \(\epsilon\). However, in many practical applications, constraints must be nearly … Read more