A subgradient splitting algorithm for optimization on nonpositively curved metric spaces

Many of the primal ingredients of convex optimization extend naturally from Euclidean to Hadamard spaces — nonpositively curved metric spaces like Euclidean, Hilbert, and hyperbolic spaces, metric trees, and more general CAT(0) cubical complexes. Linear structure, however, and the duality theory it supports are absent. Nonetheless, we introduce a new type of subgradient for convex … Read more
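In this setting, convexity is taken along geodesics rather than line segments. As background (this is the standard definition, not the paper's new subgradient construction), a function \(f\colon X \to \mathbb{R}\) on a Hadamard space \(X\) is convex when

\[
  f(\gamma_t) \;\le\; (1-t)\,f(\gamma_0) + t\,f(\gamma_1)
  \qquad \text{for all } t \in [0,1],
\]

for every geodesic \(\gamma\colon [0,1] \to X\); in a Hadamard space the geodesic joining two points is unique.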

Understanding the Douglas-Rachford splitting method through the lenses of Moreau-type envelopes

We analyze the Douglas-Rachford splitting method for weakly convex optimization problems by means of the Douglas-Rachford envelope, a merit function akin to the Moreau envelope. First, we use epi-convergence techniques to show that this envelope approximates the original objective function via epigraphs. Second, we show how global convergence and local linear convergence rates for … Read more
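To make the underlying iteration concrete, here is a minimal Python sketch of Douglas-Rachford splitting on a toy convex instance (the data, step size, and function choices below are illustrative, not the weakly convex setting analyzed in the paper):

import numpy as np

# Douglas-Rachford splitting for min_x f(x) + g(x) on a toy instance:
# f(x) = lam * ||x||_1,  g(x) = 0.5 * ||x - b||^2.

def prox_f(v, gamma, lam):
    # Soft-thresholding: prox of gamma * lam * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - gamma * lam, 0.0)

def prox_g(v, gamma, b):
    # Closed-form prox of gamma * 0.5 * ||. - b||^2.
    return (v + gamma * b) / (1.0 + gamma)

def douglas_rachford(b, lam=0.1, gamma=1.0, iters=200):
    z = np.zeros_like(b)
    for _ in range(iters):
        x = prox_f(z, gamma, lam)          # first resolvent step
        y = prox_g(2.0 * x - z, gamma, b)  # step at the reflected point
        z = z + (y - x)                    # governing-sequence update
    return x

print(douglas_rachford(b=np.array([1.0, -0.05, 0.3])))

In the convex case the x-iterates converge to a minimizer; the envelope studied in the paper is a merit function monitored along the governing sequence z.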

Extending the Reach of First-Order Algorithms for Nonconvex Min-Max Problems with Cohypomonotonicity

We focus on constrained, \(L\)-smooth, nonconvex-nonconcave min-max problems either satisfying \(\rho\)-cohypomonotonicity or admitting a solution to the \(\rho\)-weak Minty Variational Inequality (MVI), where larger values of the parameter \(\rho>0\) correspond to a greater degree of nonconvexity. These problem classes include examples in two-player reinforcement learning, interaction-dominant min-max problems, and certain synthetic test problems on … Read more
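For reference, up to differing factor conventions in the literature, the two conditions on the operator \(F\) associated with the min-max problem read

\[
  \langle F(u) - F(v),\, u - v \rangle \;\ge\; -\rho\,\|F(u) - F(v)\|^2
  \qquad \text{for all } u, v
\]

for \(\rho\)-cohypomonotonicity, and existence of a point \(u^\ast\) with

\[
  \langle F(u),\, u - u^\ast \rangle \;\ge\; -\rho\,\|F(u)\|^2
  \qquad \text{for all } u
\]

for the \(\rho\)-weak MVI; monotonicity is recovered at \(\rho = 0\).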

Weakly convex Douglas-Rachford splitting avoids strict saddle points

We prove that the Douglas-Rachford splitting method converges, almost surely, to local minimizers of semialgebraic weakly convex optimization problems, under the assumption of the strict saddle property. The approach consists of two steps: first, we prove a manifold identification result and establish local smoothness of the iteration operator involved. Then, we proceed to show that strict … Read more
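In the smooth case, the strict saddle property can be stated as: every critical point \(\bar{x}\) of \(f\) is either a local minimizer or admits a direction of strictly negative curvature,

\[
  \nabla f(\bar{x}) = 0 \;\Longrightarrow\;
  \bar{x} \ \text{is a local minimizer}
  \quad \text{or} \quad
  \lambda_{\min}\big(\nabla^2 f(\bar{x})\big) < 0;
\]

the nonsmooth, weakly convex version used here is naturally phrased along the identified manifold.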

Weak convexity and approximate subdifferentials

We explore and construct an enlarged subdifferential for weakly convex functions. The resulting object turns out to be continuous with respect to both the function argument and the enlargement parameter. We carefully analyze connections with other constructs in the literature and particularize well-known variational principles to the weakly convex setting. By resorting to the new … Read more
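Recall that \(f\) is \(\rho\)-weakly convex when \(x \mapsto f(x) + \tfrac{\rho}{2}\|x\|^2\) is convex; equivalently, subgradients satisfy the perturbed inequality

\[
  f(y) \;\ge\; f(x) + \langle g,\, y - x \rangle - \tfrac{\rho}{2}\,\|y - x\|^2
  \qquad \text{for all } g \in \partial f(x).
\]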

Convergence of the Chambolle–Pock Algorithm in the Absence of Monotonicity

The Chambolle-Pock algorithm (CPA), also known as the primal-dual hybrid gradient method (PDHG), has surged in popularity in the last decade due to its success in solving convex/monotone structured problems. This work provides convergence results for problems with varying degrees of (non)monotonicity, quantified through a so-called oblique weak Minty condition on the associated primal-dual operator. … Read more
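As a baseline for the (non)monotone analysis, here is a minimal Python sketch of the standard monotone/convex CPA/PDHG iteration for \(\min_x f(x) + g(Kx)\) on a toy lasso-type instance (all data and parameter choices below are illustrative):

import numpy as np

# Chambolle-Pock / PDHG for min_x f(x) + g(Kx) on a toy instance:
# f(x) = lam * ||x||_1,  g(y) = 0.5 * ||y - b||^2,  K = A.

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
lam = 0.1

L = np.linalg.norm(A, 2)   # operator norm of K
tau = sigma = 0.99 / L     # ensures tau * sigma * ||K||^2 < 1

def prox_f(v, t):
    # Soft-thresholding: prox of t * lam * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

def prox_g_conj(v, s):
    # Prox of s * g^*, where g^*(y) = 0.5 * ||y||^2 + <b, y>.
    return (v - s * b) / (1.0 + s)

x = np.zeros(10)
x_bar = x.copy()
y = np.zeros(20)
for _ in range(500):
    y = prox_g_conj(y + sigma * A @ x_bar, sigma)  # dual ascent step
    x_new = prox_f(x - tau * A.T @ y, tau)         # primal descent step
    x_bar = 2.0 * x_new - x                        # extrapolation
    x = x_new

print(x)

The oblique weak Minty condition in the paper relaxes the monotonicity of the primal-dual operator that underlies the standard convergence proof of this iteration.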

Range of the displacement operator of PDHG with applications to quadratic and conic programming

Primal-dual hybrid gradient (PDHG) is a first-order method for saddle-point problems and convex programming introduced by Chambolle and Pock. Recently, Applegate et al. analyzed the behavior of PDHG when applied to an infeasible or unbounded instance of linear programming, and in particular, showed that PDHG is able to diagnose these conditions. Their analysis hinges on … Read more
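A classical fact for averaged fixed-point operators is relevant here: for the iteration \(z^{k+1} = T(z^k)\), the per-iteration displacement converges to (minus) the infimal displacement vector,

\[
  z^{k+1} - z^{k} \;\longrightarrow\; -v,
  \qquad
  v \;:=\; \operatorname*{arg\,min}\big\{ \|u\| : u \in \overline{\mathrm{ran}}\,(I - T) \big\},
\]

so a nonzero \(v\) is what certifies infeasibility or unboundedness (sign conventions vary across references).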

Convexity and continuity of specific set-valued maps and their extremal value functions

In this paper, we study several classes of set-valued maps, which can be used in set-valued optimization and its applications, and their respective maximum and minimum value functions. The definitions of these maps are based on scalar-valued, vector-valued, and cone-valued maps. Moreover, we consider those extremal value functions which are obtained when optimizing linear functionals … Read more
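Concretely, with \(F\colon X \rightrightarrows Y\) a set-valued map and \(\ell\) a linear functional on \(Y\), the extremal value functions in question take the generic form (notation here is illustrative)

\[
  \varphi^{\max}_{\ell}(x) \;:=\; \sup_{y \in F(x)} \ell(y),
  \qquad
  \varphi^{\min}_{\ell}(x) \;:=\; \inf_{y \in F(x)} \ell(y).
\]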

On convexity and quasiconvexity of extremal value functions in set optimization

We study different classes of convex and quasiconvex set-valued maps defined by means of the lower-less order relation and the upper-less order relation. The aim of this paper is to formulate necessary and especially sufficient conditions for the convexity/quasiconvexity of extremal value functions.

Citation: DOI 10.23952/asvao.3.2021.3.04
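For reference, with an ordering cone \(C \subseteq Y\), the two set order relations are commonly written as (conventions vary slightly across the set-optimization literature)

\[
  A \preceq^{l} B \;:\Longleftrightarrow\; B \subseteq A + C,
  \qquad
  A \preceq^{u} B \;:\Longleftrightarrow\; A \subseteq B - C.
\]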

Calculating Optimistic Likelihoods Using (Geodesically) Convex Optimization

A fundamental problem arising in many areas of machine learning is the evaluation of the likelihood of a given observation under different nominal distributions. Frequently, these nominal distributions are themselves estimated from data, which makes them susceptible to estimation errors. We thus propose to replace each nominal distribution with an ambiguity set containing all distributions … Read more
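In generic form, the optimistic likelihood replaces the plug-in value with the best value over an ambiguity ball around the nominal distribution \(\hat{P}\) (the divergence \(d\), radius \(\rho\), and notation here are illustrative):

\[
  \ell^{\mathrm{opt}}(x) \;:=\; \sup_{P \,:\, d(P,\, \hat{P}) \le \rho} \log p_P(x),
\]

which, per the title, can be tackled by (geodesically) convex optimization.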