On the fulfillment of the complementary approximate Karush-Kuhn-Tucker conditions and algorithmic applications

Focusing on smooth constrained optimization problems, and inspired by the complementary approximate Karush-Kuhn-Tucker (CAKKT) conditions, this work introduces the weighted complementary approximate Karush-Kuhn-Tucker (WCAKKT) conditions. They are shown to be satisfied not only by safeguarded augmented Lagrangian methods, but also by inexact restoration methods, inverse and logarithmic barrier methods, and a penalized algorithm for constrained …
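
For reference, the CAKKT condition that this work builds on is commonly stated as follows for the problem min f(x) subject to h(x) = 0 and g(x) ≤ 0 (a background sketch only; the weighted WCAKKT variant itself is not reproduced in this excerpt). A feasible point x* satisfies CAKKT if there exist sequences x^k → x*, {λ^k} ⊂ ℝ^m, and {μ^k} ⊂ ℝ^p with μ^k ≥ 0 such that

\[
\nabla f(x^k) + \sum_{i=1}^{m} \lambda_i^k \nabla h_i(x^k) + \sum_{j=1}^{p} \mu_j^k \nabla g_j(x^k) \to 0,
\qquad
\lambda_i^k h_i(x^k) \to 0,
\qquad
\mu_j^k g_j(x^k) \to 0 .
\]

The distinguishing feature relative to plain approximate KKT conditions is that the multiplier-constraint products themselves must vanish along the sequence.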

A novel sequential optimality condition for smooth constrained optimization and algorithmic consequences

In the smooth constrained optimization setting, this work introduces the Domain Complementary Approximate Karush-Kuhn-Tucker (DCAKKT) condition, inspired by a sequential optimality condition recently devised for nonsmooth constrained optimization problems. It is shown that the augmented Lagrangian method can generate limit points satisfying DCAKKT, and it is proved that such a condition is not related to …

A New Sequential Optimality Condition for Constrained Nonsmooth Optimization

We introduce a sequential optimality condition for locally Lipschitz constrained nonsmooth optimization that is verifiable using only derivative information and holds even in the absence of any constraint qualification. The proposed sequential optimality condition is not only novel for nonsmooth problems, but brings new insights for the smooth case as well. We present a practical algorithm …
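
To illustrate what "verifiable using only derivative information" means in practice (a generic smooth-case illustration, not the paper's nonsmooth condition): a sequential optimality condition can serve as a stopping test that measures, at each iterate, a stationarity residual and a complementarity residual built from gradients and multiplier estimates. A minimal sketch, assuming NumPy and an AKKT-style test:

import numpy as np

def akkt_residuals(grad_f, jac_h, jac_g, g_vals, lam, mu):
    """AKKT-style residuals for  min f(x)  s.t.  h(x) = 0,  g(x) <= 0.

    grad_f : objective gradient at the current iterate, shape (n,)
    jac_h  : equality-constraint Jacobian, shape (m, n)
    jac_g  : inequality-constraint Jacobian, shape (p, n)
    g_vals : inequality values g(x), shape (p,)
    lam    : equality multiplier estimates, shape (m,)
    mu     : nonnegative inequality multiplier estimates, shape (p,)
    """
    # Stationarity residual: gradient of the Lagrangian.
    stat = grad_f + jac_h.T @ lam + jac_g.T @ mu
    # Approximate complementarity: mu_j must vanish where g_j is inactive.
    comp = np.minimum(-g_vals, mu)
    return np.linalg.norm(stat, np.inf), np.linalg.norm(comp, np.inf)

An algorithm declares approximate optimality once both residuals fall below a tolerance; no constraint qualification is needed to state the test.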

Gradient Sampling Methods for Nonsmooth Optimization

This paper reviews the gradient sampling methodology for solving nonsmooth, nonconvex optimization problems. An intuitively straightforward gradient sampling algorithm is stated and its convergence properties are summarized. Throughout this discussion, we emphasize the simplicity of gradient sampling as an extension of the steepest descent method for minimizing smooth objectives. We then provide overviews of various …
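
A minimal sketch of one gradient sampling iteration, assuming a Python/NumPy/SciPy setting; the helper names (min_norm_element, sample_ball) and parameter defaults are illustrative, not taken from the paper:

import numpy as np
from scipy.optimize import minimize

def min_norm_element(G):
    """Minimum-norm element of the convex hull of the rows of G.

    Solves the QP  min_w ||G^T w||^2  s.t.  w >= 0, sum(w) = 1  via SLSQP.
    """
    m = G.shape[0]
    w0 = np.full(m, 1.0 / m)
    res = minimize(
        lambda w: (G.T @ w) @ (G.T @ w),
        w0,
        jac=lambda w: 2.0 * (G @ (G.T @ w)),
        bounds=[(0.0, None)] * m,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
        method="SLSQP",
    )
    return G.T @ res.x

def sample_ball(rng, center, radius, k):
    """k points drawn uniformly from the ball of given radius around center."""
    n = center.size
    u = rng.normal(size=(k, n))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    r = radius * rng.uniform(size=(k, 1)) ** (1.0 / n)
    return center + r * u

def gradient_sampling(f, grad, x0, eps=0.1, max_iter=200, tol=1e-6,
                      beta=1e-4, seed=0):
    """Bare-bones gradient sampling iteration (sketch only)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    m = 2 * x.size  # sample size (the standard method uses m >= n + 1)
    for _ in range(max_iter):
        pts = sample_ball(rng, x, eps, m)
        G = np.vstack([grad(x)] + [grad(p) for p in pts])
        g = min_norm_element(G)      # stabilized steepest-descent direction d = -g
        if np.linalg.norm(g) <= tol:
            if eps <= tol:           # residual and radius both small: stop
                break
            eps *= 0.5               # otherwise shrink the sampling radius
            continue
        t = 1.0                      # Armijo backtracking along d = -g
        while f(x - t * g) > f(x) - beta * t * (g @ g) and t > 1e-12:
            t *= 0.5
        x = x - t * g
        # NOTE: the full method also checks differentiability of f at the new
        # iterate and perturbs it if necessary; that safeguard is omitted here.
    return x

For instance, with f = lambda x: abs(x[0]) + 2 * abs(x[1]) and grad = lambda x: np.array([np.sign(x[0]), 2 * np.sign(x[1])]) (defined almost everywhere), the iterates approach the nonsmooth minimizer at the origin. When f is smooth and only the gradient at x is used, the direction reduces to ordinary steepest descent, which is the extension the review emphasizes.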

On the local convergence analysis of the Gradient Sampling method

The Gradient Sampling method is a recently developed tool for solving unconstrained nonsmooth optimization problems. Using only first-order information about the objective function, it generalizes the steepest descent method, one of the most classical methods for minimizing a smooth function. This manuscript aims to determine under which circumstances one can expect the same local …
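
As classical background (not a result from the paper), the local behavior one might hope to inherit from steepest descent is its linear rate: on a strongly convex quadratic f(x) = ½ xᵀAx − bᵀx with condition number κ = λ_max(A)/λ_min(A), exact line search yields

\[
f(x^{k+1}) - f^{\ast} \le \left( \frac{\kappa - 1}{\kappa + 1} \right)^{2} \left( f(x^{k}) - f^{\ast} \right).
\]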

A Second-Order Information-Based Gradient and Function Sampling Method for Nonconvex, Nonsmooth Optimization

This paper proposes a gradient and function sampling method that, under special circumstances, converges superlinearly to a minimizer of a general class of nonsmooth and nonconvex functions. We present global and local convergence theory, with illustrative examples that corroborate and elucidate the theoretical results obtained throughout the manuscript.

A Nonmonotone Approach without Differentiability Test for Gradient Sampling Methods

Recently, optimization problems involving nonsmooth, locally Lipschitz functions have been the subject of investigation, and an innovative method known as Gradient Sampling has gained attention. Although the method has shown good results on important real-world problems, some of its drawbacks remain unexplored. This study suggests modifications to the gradient sampling class of methods in order to …
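
For context on the nonmonotone idea (a generic illustration, not the specific modification proposed in the paper, whose description is truncated here): a standard nonmonotone acceptance rule of Grippo-Lampariello-Lucidi type accepts a step once sufficient decrease holds relative to the maximum of the last M objective values rather than the current one. A minimal sketch, assuming NumPy:

import numpy as np
from collections import deque

def nonmonotone_backtracking(f, x, d, gTd, f_hist, sigma=1e-4, t_min=1e-12):
    """Armijo backtracking with a nonmonotone (GLL-type) reference value.

    Accepts the first t with  f(x + t d) <= max(f_hist) + sigma * t * gTd,
    where gTd = <g, d> < 0 for a descent direction d and f_hist holds the
    last M objective values.
    """
    f_ref = max(f_hist)
    t = 1.0
    while t >= t_min:
        x_new = x + t * d
        if f(x_new) <= f_ref + sigma * t * gTd:
            return t, x_new
        t *= 0.5
    return 0.0, x  # no acceptable step found; caller may adjust the sampling

# Usage sketch: maintain a bounded window of recent objective values, e.g.
# f_hist = deque(maxlen=10); f_hist.append(f(x0)); then call the routine
# each iteration and append the accepted value.

Relaxing monotonicity in this way can reduce the number of rejected trial steps on nonsmooth problems, where strict per-iteration descent tests are often overly conservative.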