A 4-step elementary proof of the existence of Lagrange multipliers
We present a simplified proof of Lagrange’s theorem using only elementary properties of sets and sequences.
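For reference, the classical statement being proved, in its simplest equality-constrained form (a sketch; the paper's exact setting and its four steps are in the full text): if $x^*$ is a local minimizer of $f$ subject to $g(x)=0$, with $f$ and $g$ continuously differentiable and $\nabla g(x^*)\neq 0$, then there exists $\lambda\in\mathbb{R}$ such that $\nabla f(x^*)+\lambda\,\nabla g(x^*)=0$.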
This work investigates global convergence properties of a safeguarded augmented Lagrangian method applied to nonlinear programming problems, with an emphasis on the role of constraint qualifications in ensuring boundedness of the Lagrange multiplier estimates, also known as dual sequences. When functions with locally Lipschitz continuous derivatives define the constraint set, we prove that the Error …
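For orientation, the safeguarded augmented Lagrangian scheme referred to here can be sketched, for equality constraints $h(x)=0$, as approximately minimizing $L_{\rho_k}(x,\bar\lambda^k) = f(x) + (\bar\lambda^k)^\top h(x) + \tfrac{\rho_k}{2}\|h(x)\|^2$ at each outer iteration and then updating the multiplier estimate via $\lambda^{k+1} = \bar\lambda^k + \rho_k\, h(x^{k+1})$, where the safeguarded $\bar\lambda^k$ is the projection of $\lambda^k$ onto a fixed compact box; the dual sequence whose boundedness is at stake is $\{\lambda^k\}$. This is the standard textbook form, not necessarily the paper's exact setting.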
This tutorial provides an overview of the current state-of-the-art in the sensitivity analysis for nonlinear programming. Building upon the fundamental work of Fiacco, it derives the sensitivity of primal-dual solutions for regular nonlinear programs and explores the extent to which Fiacco’s framework can be extended to degenerate nonlinear programs with non-unique dual solutions. The survey …
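As a sketch of the regular case: for a parametric program $\min_x f(x,p)$ s.t. $h(x,p)=0$ satisfying the usual regularity assumptions, Fiacco's result amounts to differentiating the KKT system with respect to the parameter $p$, giving the sensitivity system
\[
\begin{pmatrix} \nabla^2_{xx} L & \nabla_x h^\top \\ \nabla_x h & 0 \end{pmatrix}
\begin{pmatrix} \partial x^*/\partial p \\ \partial \lambda^*/\partial p \end{pmatrix}
= - \begin{pmatrix} \nabla^2_{xp} L \\ \nabla_p h \end{pmatrix}.
\]
Degenerate programs with non-unique dual solutions, the focus of the survey, fall outside this regular setting.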
Derivative-based iterative methods for nonlinearly constrained non-convex optimization usually share common algorithmic components, such as strategies for computing a descent direction and mechanisms that promote global convergence. Based on this observation, we introduce an abstract framework based on four common ingredients that describes most derivative-based iterative methods and unifies their workflows. We then present Uno, …
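As a rough illustration of the workflow decomposition described above, reduced to the two ingredients the abstract names explicitly (a descent-direction strategy and a globalization mechanism), here is a hypothetical sketch; the class names and interfaces are invented for illustration and are not Uno's actual API.

```python
import numpy as np

class SteepestDescentDirection:
    """Direction strategy: negative gradient (one possible plug-in)."""
    def compute(self, x, g):
        return -g

class ArmijoLineSearch:
    """Globalization mechanism: backtrack until sufficient decrease holds."""
    def __init__(self, c=1e-4, shrink=0.5):
        self.c, self.shrink = c, shrink

    def step(self, f, x, g, d):
        t = 1.0
        while f(x + t * d) > f(x) + self.c * t * (g @ d):
            t *= self.shrink
        return x + t * d

def solve(f, grad, x0, direction_strategy, globalization, tol=1e-8, max_iter=500):
    """Abstract workflow shared by many derivative-based methods:
    compute a direction, then let the globalization mechanism produce the next iterate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        d = direction_strategy.compute(x, g)
        x = globalization.step(f, x, g, d)
    return x

# Example: minimize a simple quadratic with the two plug-ins above.
f = lambda x: 0.5 * (x @ x)
grad = lambda x: x
print(solve(f, grad, np.array([3.0, -4.0]), SteepestDescentDirection(), ArmijoLineSearch()))
```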
We first consider convex composite optimization models with a local Lipschitzness condition imposed on the gradient of the differentiable term. We study the classical proximal gradient method with our novel enhanced adaptive stepsize selection. To obtain the convergence of the proposed algorithm, we establish a sufficient-decrease-type inequality associated with our new …
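A minimal sketch, assuming the composite objective is $f(x)+g(x)$ with $f$ smooth and $g$ prox-friendly, of a proximal gradient iteration whose stepsize is chosen by backtracking on a sufficient-decrease test; this generic rule is only a stand-in for the paper's enhanced adaptive stepsize selection.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(f, grad_f, prox_g, x0, t0=1.0, shrink=0.5, max_iter=500, tol=1e-8):
    """Proximal gradient for min f(x) + g(x): f smooth (gradient only locally
    Lipschitz), g with an inexpensive prox. The stepsize is found by backtracking
    until a standard sufficient-decrease inequality holds; an illustrative rule,
    not the paper's enhanced adaptive stepsize."""
    x, t = np.asarray(x0, dtype=float), t0
    for _ in range(max_iter):
        g = grad_f(x)
        while True:
            x_new = prox_g(x - t * g, t)
            d = x_new - x
            # sufficient decrease: f(x_new) <= f(x) + <grad f(x), d> + ||d||^2 / (2t)
            if f(x_new) <= f(x) + g @ d + (d @ d) / (2.0 * t):
                break
            t *= shrink
        if np.linalg.norm(d) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x, t = x_new, t / shrink  # tentatively enlarge the stepsize for the next iteration
    return x

# Example: a small lasso instance, min 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((30, 10)), rng.standard_normal(30), 0.1
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
x_hat = proximal_gradient(f, grad_f, lambda v, t: soft_threshold(v, lam * t), np.zeros(10))
```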
In [R. J. Baraldi and D. P. Kouri, Mathematical Programming, (2022), pp. 1-40], we introduced an inexact trust-region algorithm for minimizing the sum of a smooth nonconvex function and a nonsmooth convex function. The principal expense of this method lies in computing a trial iterate that satisfies the so-called fraction of Cauchy decrease condition, a bound that ensures …
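For readers unfamiliar with the term, the fraction of Cauchy decrease condition, in its usual trust-region form (the paper's inexact nonsmooth variant is more elaborate), asks the trial step $s_k$ to achieve a fixed fraction of the model decrease attained by the Cauchy step $s_k^C$: $m_k(0) - m_k(s_k) \ge \kappa \bigl( m_k(0) - m_k(s_k^C) \bigr)$ for some $\kappa \in (0,1]$, where $m_k$ is the local model minimized within the trust region.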
In this paper, we propose a novel stepsize for the classical gradient descent scheme to solve unconstrained nonlinear optimization problems. We are concerned with convex and smooth objectives without the globally Lipschitz gradient condition. Our new method requires only a locally Lipschitz gradient, yet still achieves the rate $O(\frac{1}{k})$ for $f(x^k)-f_*$. By …
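Since the abstract does not spell out the new stepsize, the sketch below illustrates gradient descent without a global Lipschitz assumption using a generic locally adaptive choice (a curvature estimate from the last two iterates); it is not the stepsize proposed in the paper.

```python
import numpy as np

def adaptive_gradient_descent(grad, x0, t0=1e-3, max_iter=1000, tol=1e-10):
    """Gradient descent with a locally adaptive stepsize t_k = 1 / L_k, where
    L_k estimates the local Lipschitz constant of the gradient from the last two
    iterates. Purely illustrative; not the stepsize proposed in the paper."""
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - t0 * g_prev              # one small warm-up step
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        # local curvature (Lipschitz) estimate from successive gradients
        L_est = np.linalg.norm(g - g_prev) / max(np.linalg.norm(x - x_prev), 1e-16)
        t = min(1.0 / max(L_est, 1e-16), 1e6)  # cap the stepsize in flat regions
        x_prev, g_prev = x, g
        x = x - t * g
    return x

# Example: f(x) = sum(x^4) has a locally, but not globally, Lipschitz gradient.
grad = lambda x: 4.0 * x ** 3
print(adaptive_gradient_descent(grad, np.array([1.5])))
```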
Focusing on smooth constrained optimization problems, and inspired by the complementary approximate Karush-Kuhn-Tucker (CAKKT) conditions, this work introduces the weighted complementary approximate Karush-Kuhn-Tucker (WCAKKT) conditions. They are shown to be satisfied not only by safeguarded augmented Lagrangian methods, but also by inexact restoration methods, inverse and logarithmic barrier methods, and a penalized algorithm for constrained …
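As background, the (unweighted) CAKKT conditions that motivate the new ones are sequential: for $\min f(x)$ s.t. $g(x) \le 0$, a feasible point $x^*$ satisfies CAKKT if there exist sequences $x^k \to x^*$ and $\lambda^k \ge 0$ such that $\nabla f(x^k) + \sum_i \lambda_i^k \nabla g_i(x^k) \to 0$ and $\lambda_i^k\, g_i(x^k) \to 0$ for every $i$. This is the standard formulation; the weighted variant introduced in the paper modifies these conditions.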
In this paper, we propose a variant of the reduced Jacobian method (RJM) introduced by El Maghri and Elboulqe in [JOTA, 179 (2018) 917–943] for multicriteria optimization under linear constraints. The motivation is that, in contrast to RJM, which has only global convergence to Pareto KKT-stationary points in the classical sense of accumulation points, this new variant …
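In this linearly constrained multicriteria setting ($\min (f_1,\dots,f_m)$ s.t. $Ax \le b$), a standard formulation of Pareto KKT-stationarity of $x^*$, given here for orientation, is the existence of weights $\theta \ge 0$ with $\sum_j \theta_j = 1$ and multipliers $\mu \ge 0$ such that $\sum_j \theta_j \nabla f_j(x^*) + A^\top \mu = 0$ and $\mu^\top (A x^* - b) = 0$.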
In this paper, we consider lasso problems with zero-sum constraint, commonly required for the analysis of compositional data in high-dimensional spaces. A novel algorithm is proposed to solve these problems, combining a tailored active-set technique, to identify the zero variables in the optimal solution, with a 2-coordinate descent scheme. At every iteration, the algorithm chooses …
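A minimal sketch of the kind of 2-coordinate move that keeps the zero-sum constraint satisfied: each update changes only a pair $(i, j)$ along the direction $e_i - e_j$, so $\sum_k x_k$ is preserved exactly. The greedy pair selection and the scalar line search below are deliberately simple stand-ins, not the paper's tailored active-set technique.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def zero_sum_lasso_2cd(A, b, lam, n_iter=200):
    """2-coordinate descent for min 0.5*||Ax - b||^2 + lam*||x||_1  s.t.  sum(x) = 0.
    Each step moves along e_i - e_j, which preserves the zero-sum constraint.
    Greedy pair choice and a generic scalar minimization are illustrative only."""
    n = A.shape[1]
    x = np.zeros(n)  # feasible start: sum(x) = 0

    def objective(z):
        r = A @ z - b
        return 0.5 * (r @ r) + lam * np.abs(z).sum()

    for _ in range(n_iter):
        grad_smooth = A.T @ (A @ x - b)
        # Greedy pair: increase the coordinate with the most negative partial
        # derivative, decrease the one with the most positive partial derivative.
        i, j = int(np.argmin(grad_smooth)), int(np.argmax(grad_smooth))
        if i == j:
            break
        direction = np.zeros(n)
        direction[i], direction[j] = 1.0, -1.0
        # 1-D minimization of the convex objective along the pair direction
        res = minimize_scalar(lambda t: objective(x + t * direction))
        x = x + res.x * direction
    return x

# Example usage on random data; the zero-sum constraint holds at the solution.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((40, 15)), rng.standard_normal(40)
x_hat = zero_sum_lasso_2cd(A, b, lam=0.1)
print(abs(x_hat.sum()) < 1e-8)
```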