A relaxed version of Ryu’s three-operator splitting method for structured nonconvex optimization

In this work, we propose a modification of Ryu’s splitting algorithm for minimizing the sum of three functions, two of which are convex with Lipschitz continuous gradients, while the third is an arbitrary proper closed function that is not necessarily convex. The modification is essential to facilitate the convergence analysis, particularly in establishing a … Read more
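In symbols, and with placeholder names $f_1$, $f_2$, $f_3$ that are not taken from the paper, the problem class described here can be stated as

\[
\min_{x} \; f_1(x) + f_2(x) + f_3(x),
\]

where $f_1$ and $f_2$ are convex with Lipschitz continuous gradients and $f_3$ is proper, closed, and possibly nonconvex.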

A note on asynchronous Projective Splitting in Julia

While it has been mathematically proven that Projective Splitting (PS) algorithms can converge in parallel and distributed computing settings, to date there appear to have been no open-source implementations of the full algorithm with asynchronous computing capabilities. This note fills that gap by providing a Julia implementation of the asynchronous PS algorithm of Eckstein and Combettes for … Read more

NonOpt: Nonconvex, Nonsmooth Optimizer

NonOpt, a C++ software package for minimizing locally Lipschitz objective functions, is presented. The software is intended primarily for minimizing objective functions that are nonconvex and/or nonsmooth. The package has implementations of two main algorithmic strategies: a gradient-sampling and a proximal-bundle method. Each algorithmic strategy can employ quasi-Newton techniques for accelerating convergence in practice. The … Read more
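As an illustration of the gradient-sampling strategy mentioned above, here is a minimal Python sketch; it is not NonOpt's interface or implementation, and the test function, sampling radius, and stopping rule are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

def gradient_sampling_step(f, grad, x, eps=0.1, m=10, rng=np.random.default_rng(0)):
    # Sample gradients at x and at m nearby points; their convex hull serves as
    # a crude surrogate for an eps-subdifferential (the gradient-sampling idea).
    pts = [x] + [x + eps * rng.standard_normal(x.size) for _ in range(m)]
    G = np.array([grad(p) for p in pts])
    n = len(pts)
    # Minimum-norm element of conv{sampled gradients}: a small QP over the simplex.
    obj = lambda lam: float(np.dot(lam @ G, lam @ G))
    res = minimize(obj, np.full(n, 1.0 / n), bounds=[(0.0, 1.0)] * n,
                   constraints={"type": "eq", "fun": lambda lam: lam.sum() - 1.0})
    g = res.x @ G
    gnorm = np.linalg.norm(g)
    if gnorm < 1e-8:
        return x  # near-stationary for the sampled gradients; a full method would shrink eps here
    d = -g / gnorm
    t = 1.0
    # Simple backtracking (Armijo-type) line search along the normalized direction.
    while f(x + t * d) > f(x) - 1e-4 * t * gnorm:
        t *= 0.5
        if t < 1e-12:
            return x
    return x + t * d

# Illustrative nonsmooth, nonconvex test function (not from the paper).
f = lambda x: abs(x[0]) + 10.0 * abs(x[1] - x[0] ** 2)
grad = lambda x: np.array([np.sign(x[0]) - 20.0 * x[0] * np.sign(x[1] - x[0] ** 2),
                           10.0 * np.sign(x[1] - x[0] ** 2)])
x = np.array([1.0, 0.7])
for _ in range(100):
    x = gradient_sampling_step(f, grad, x)
print("x =", x, "f(x) =", f(x))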

The Least Singular Value Function in Variational Analysis

Metric regularity is among the central concepts of nonlinear and variational analysis, constrained optimization, and their numerous applications. However, metric regularity can be elusive for some important ill-posed classes of problems including polynomial equations, parametric variational systems, smooth reformulations of complementarity systems with degenerate solutions, etc. The study of stability issues for such … Read more
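For reference, the standard definition reads as follows (notation not taken from the excerpt): a set-valued mapping $F$ is metrically regular at a point $(\bar x, \bar y)$ of its graph if there are a constant $\kappa > 0$ and neighborhoods $U$ of $\bar x$ and $V$ of $\bar y$ such that

\[
\operatorname{dist}\bigl(x, F^{-1}(y)\bigr) \le \kappa \, \operatorname{dist}\bigl(y, F(x)\bigr) \quad \text{for all } x \in U, \; y \in V.
\]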

New Sufficient and Necessary Conditions for Constrained and Unconstrained Lipschitzian Error Bounds

Local error bounds play a fundamental role in mathematical programming and variational analysis. They are used e.g. as constraint qualifications in optimization, in developing calculus rules for generalized derivatives in nonsmooth and set-valued analysis, and they serve as a key ingredient in the design and convergence analysis of Newton-type methods for solving systems of possibly … Read more
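For orientation, a local Lipschitzian error bound for the solution set $S$ of a system with residual function $r$ (symbols illustrative, not from the excerpt) takes the form

\[
\operatorname{dist}(x, S) \le c \, \|r(x)\| \quad \text{for all } x \text{ near } \bar x \in S,
\]

with some constant $c > 0$: the distance to the solution set is bounded by a computable residual.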

A Universally Optimal Primal-Dual Method for Minimizing Heterogeneous Compositions

This paper proposes a universal, optimal algorithm for convex minimization problems of the composite form $g_0(x)+h(g_1(x),\dots, g_m(x)) + u(x)$. We allow each $g_j$ to independently range from being nonsmooth Lipschitz to smooth, from convex to strongly convex, described by notions of Hölder continuous gradients and uniform convexity. Note that, although the objective is built from … Read more
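One illustrative instance of this composite form (not taken from the paper) is an exact-penalty-type problem with $m = 1$:

\[
\min_{x} \; g_0(x) + \rho \max\{g_1(x), 0\} + \delta_C(x),
\]

where $g_0$ is smooth and convex, $g_1$ is convex but possibly nonsmooth, $\rho > 0$, $\delta_C$ is the indicator of a simple set $C$, and one takes $h(t) = \rho \max\{t, 0\}$ and $u = \delta_C$.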

On Sum-Rules for Second-Order Contingent Derivatives

We are concerned with contingent derivatives and their second-order counterparts (introduced by Ngai et al.) of set-valued mappings. Special attention is given to the development of new sum-rules for second-order contingent derivatives. To be precise, we want to find conditions under which the second-order contingent derivative of the sum of a smooth and a set-valued … Read more
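For reference, the (first-order) contingent derivative of a set-valued mapping $F$ at a point $(\bar x, \bar y)$ of its graph is the mapping whose graph is the contingent (Bouligand) tangent cone to $\operatorname{gph} F$, i.e.

\[
DF(\bar x, \bar y)(u) = \{\, v : (u, v) \in T_{\operatorname{gph} F}(\bar x, \bar y) \,\};
\]

loosely speaking, the second-order counterparts studied in the paper are built analogously from second-order tangent sets of the graph.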

A new problem qualification based on approximate KKT conditions for Lipschitzian optimization with application to bilevel programming

When dealing with general Lipschitzian optimization problems, there are many problem classes where even weak constraint qualifications fail at local minimizers. In contrast to a constraint qualification, a problem qualification relies not only on the constraints but also on the objective function to guarantee that a local minimizer is a Karush-Kuhn-Tucker (KKT) point. For … Read more
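For orientation only, in the smooth inequality-constrained case $\min f(x)$ s.t. $g_i(x) \le 0$, $i = 1, \dots, m$, a feasible point $\bar x$ is an approximate KKT point if there are sequences $x^k \to \bar x$ and multipliers $\lambda^k \ge 0$ with

\[
\nabla f(x^k) + \sum_{i=1}^m \lambda_i^k \nabla g_i(x^k) \to 0, \qquad \lambda_i^k = 0 \ \text{whenever } g_i(\bar x) < 0.
\]

A problem qualification then supplies conditions, involving the objective as well as the constraints, under which such a point is an exact KKT point; in the Lipschitzian setting, gradients would be replaced by suitable generalized subdifferentials.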

Relaxation methods for pessimistic bilevel optimization

We consider a smooth pessimistic bilevel optimization problem, where the lower-level problem is convex and satisfies the Slater constraint qualification. These assumptions ensure that the Karush-Kuhn-Tucker (KKT) reformulation of our problem is well-defined. We then introduce and study the (i) Scholtes, (ii) Lin and Fukushima, (iii) Kadrani, Dussault and Benchakroun, (iv) Steffensen and Ulbrich, and … Read more
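For orientation, the Scholtes relaxation replaces a complementarity condition $0 \le G(x) \perp H(x) \ge 0$, as it arises in KKT-type reformulations, by the relaxed system (relaxation parameter $t > 0$, notation illustrative)

\[
G_i(x) \ge 0, \qquad H_i(x) \ge 0, \qquad G_i(x)\, H_i(x) \le t, \qquad i = 1, \dots, p,
\]

and drives $t \downarrow 0$; the other schemes listed relax the complementarity set in different ways.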

Convergence of Descent Optimization Algorithms under Polyak-Lojasiewicz-Kurdyka Conditions

This paper develops a comprehensive convergence analysis for generic classes of descent algorithms in nonsmooth and nonconvex optimization under several conditions of the Polyak-Lojasiewicz-Kurdyka (PLK) type. Among other results, we prove the finite termination of generic algorithms under the PLK conditions with lower exponents. Specifications are given to establish new convergence rates for inexact reduced … Read more
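For reference, the basic Polyak-Lojasiewicz inequality for a smooth function $f$ with infimum $f^*$ reads (exponent-$\tfrac12$ case, constants illustrative)

\[
\tfrac{1}{2}\,\|\nabla f(x)\|^2 \ \ge\ \mu \bigl(f(x) - f^*\bigr) \quad \text{for all } x,
\]

with $\mu > 0$; Kurdyka-Lojasiewicz-type conditions extend this to nonsmooth functions via subdifferentials and a desingularizing function, commonly of the form $\varphi(s) = c\, s^{1-\theta}$, whose exponent $\theta$ governs the attainable convergence rate.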