A Riemannian AdaGrad-Norm Method

We propose a manifold AdaGrad-Norm method (\textsc{MAdaGrad}), which extends the norm version of AdaGrad (AdaGrad-Norm) to Riemannian optimization. In contrast to line-search schemes, which may require several exponential map computations per iteration, \textsc{MAdaGrad} requires only one. Assuming the objective function $f$ has a Lipschitz continuous Riemannian gradient, we show that the method requires at most $\mathcal{O}(\varepsilon^{-2})$ …
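The abstract does not spell out the update, but the classical AdaGrad-Norm step has a well-known form, and its manifold transplant suggests a sketch along the following lines (the interfaces grad_f and exp_map, and the sphere example, are assumptions for illustration, not taken from the paper):

import numpy as np

def madagrad_sketch(x0, grad_f, exp_map, eta=1.0, b0=1e-2, n_iters=100):
    # grad_f(x): Riemannian gradient at x (a tangent vector); exp_map(x, v):
    # the exponential map Exp_x(v). The update transplants Euclidean
    # AdaGrad-Norm,
    #   b_k^2 = b_{k-1}^2 + ||g_k||^2,   x_{k+1} = x_k - (eta / b_k) g_k,
    # to the manifold, using exactly one exponential map per iteration.
    x, b2 = x0, b0 ** 2
    for _ in range(n_iters):
        g = grad_f(x)
        b2 += float(np.dot(g, g))  # squared norm of the gradient; the
                                   # Euclidean inner product stands in for
                                   # the metric on embedded submanifolds
        x = exp_map(x, -(eta / np.sqrt(b2)) * g)
    return x

# Illustrative setup on the unit sphere (assumed, not from the paper):
# exp_map(x, v) = cos(||v||) * x + sin(||v||) * v / ||v||  for v != 0,
# grad_f(x) = (I - x x^T) @ euclidean_grad(x).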

Polyconvex double well functions

We investigate polyconvexity of the double well function $f(X) := |X-X_1|^2|X-X_2|^2$ for given matrices $X_1, X_2 \in \mathbb{R}^{n \times n}$. Such functions are fundamental in the modeling of phase transitions in materials, but their non-convex nature presents challenges for the analysis of variational problems. We prove that $f$ is polyconvex if and only if the …
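For context (Ball's notion, standard background rather than part of the abstract), $f : \mathbb{R}^{n \times n} \to \mathbb{R}$ is polyconvex when it can be written as
\[
f(X) = g\bigl(T(X)\bigr),
\]
where $T(X)$ collects all minors of $X$ and $g$ is convex; for $n = 2$ this reduces to $f(X) = g(X, \det X)$ with $g : \mathbb{R}^{2 \times 2} \times \mathbb{R} \to \mathbb{R}$ convex.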

Preconditioning for rational approximation

In this paper, we show that minimax rational approximations can be enhanced by introducing a controlling parameter on the denominator of the rational function. This is implemented by adding a small set of linear constraints to the underlying optimization problem. The modification integrates naturally into approximation models formulated as linear programming problems. We demonstrate our …
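The abstract leaves the constraint set unspecified; one plausible reading, sketched below under that assumption, is a lower bound on the denominator added to the classical linear-programming linearization of minimax rational approximation (the function rational_feasible and the parameter delta are illustrative names, not from the paper):

import numpy as np
from scipy.optimize import linprog

def rational_feasible(x, f, m, n, eps, delta=1e-3):
    # Check whether a degree-(m, n) rational approximant p/q achieves uniform
    # error eps on the samples (x_i, f_i), with the denominator bounded below
    # by delta. Uses the classical linearization
    #   |f_i q(x_i) - p(x_i)| <= eps * q(x_i),
    # which is linear in the coefficients of p and q for fixed eps, so
    # feasibility is a linear program. q is normalized by fixing its constant
    # coefficient to 1, i.e. q(x) = 1 + b_1 x + ... + b_n x^n.
    P = np.vander(x, m + 1, increasing=True)         # rows [1, x_i, ..., x_i^m]
    Q = np.vander(x, n + 1, increasing=True)[:, 1:]  # rows [x_i, ..., x_i^n]
    A = np.vstack([
        np.hstack([-P, (f - eps)[:, None] * Q]),     # f q - p <= eps q
        np.hstack([ P, -(f + eps)[:, None] * Q]),    # p - f q <= eps q
        np.hstack([np.zeros_like(P), -Q]),           # q >= delta on the samples
    ])
    b = np.concatenate([-(f - eps), (f + eps), np.full(len(x), 1.0 - delta)])
    res = linprog(np.zeros(A.shape[1]), A_ub=A, b_ub=b, bounds=(None, None))
    return res.success, res.x

Bisecting on eps then recovers an approximate minimax fit; the denominator bound is one natural place where a small set of extra linear constraints could enter, in the spirit the abstract describes.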

A subgradient splitting algorithm for optimization on nonpositively curved metric spaces

Many of the primal ingredients of convex optimization extend naturally from Euclidean to Hadamard spaces: nonpositively curved metric spaces such as Euclidean, Hilbert, and hyperbolic spaces, metric trees, and more general CAT(0) cubical complexes. Linear structure, however, and the duality theory it supports, are absent. Nonetheless, we introduce a new type of subgradient for convex …
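As background (standard in CAT(0) geometry, not specific to this paper), a function $f$ on a Hadamard space $(X, d)$ is geodesically convex when, along every geodesic $\gamma : [0,1] \to X$,
\[
f(\gamma(t)) \;\le\; (1-t)\, f(\gamma(0)) + t\, f(\gamma(1)), \qquad t \in [0,1];
\]
the difficulty the paper confronts is that, without linear structure, a subgradient can no longer be a dual vector certifying such inequalities.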

Understanding the Douglas-Rachford splitting method through the lenses of Moreau-type envelopes

We analyze the Douglas-Rachford splitting method for weakly convex optimization problems through the Douglas-Rachford envelope, a merit function akin to the Moreau envelope. First, we use epi-convergence techniques to show that this envelope approximates the original objective function via epigraphs. Second, we show how global convergence and local linear convergence rates for …
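For reference, the Moreau envelope of $f$ with parameter $\lambda > 0$ is
\[
e_{\lambda} f(x) \;=\; \inf_{y} \Bigl\{ f(y) + \tfrac{1}{2\lambda}\, \|x - y\|^{2} \Bigr\},
\]
a smoothed under-approximation that, for suitable parameter choices, shares the minimizers of $f$; the Douglas-Rachford envelope plays the analogous role for a sum of two functions, assembled from their proximal maps (its precise form is given in the paper and not reproduced here).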

Extending the Reach of First-Order Algorithms for Nonconvex Min-Max Problems with Cohypomonotonicity

We focus on constrained, \(L\)-smooth, nonconvex-nonconcave min-max problems either satisfying \(\rho\)-cohypomonotonicity or admitting a solution to the \(\rho\)-weakly Minty Variational Inequality (MVI), where larger values of the parameter \(\rho>0\) correspond to a greater degree of nonconvexity. These problem classes include examples in two-player reinforcement learning, interaction-dominant min-max problems, and certain synthetic test problems on …
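In one common normalization (constants vary across the literature), a single-valued operator $F$ is $\rho$-cohypomonotone when
\[
\langle F(z) - F(z'),\, z - z' \rangle \;\ge\; -\rho\, \| F(z) - F(z') \|^{2} \qquad \text{for all } z, z',
\]
while $z^{*}$ solves the $\rho$-weak MVI when
\[
\langle F(z),\, z - z^{*} \rangle \;\ge\; -\tfrac{\rho}{2}\, \| F(z) \|^{2} \qquad \text{for all } z;
\]
monotonicity corresponds to $\rho = 0$, and larger $\rho$ admits more nonmonotone behavior.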

Weakly convex Douglas-Rachford splitting avoids strict saddle points

We prove that the Douglas-Rachford splitting method converges, almost surely, to local minimizers of semialgebraic weakly convex optimization problems under the strict saddle property. The approach consists of two steps: first, we prove a manifold identification result and establish local smoothness of the iteration operator; then, we show that strict …
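For orientation, here is a minimal sketch of the Douglas-Rachford iteration operator in its textbook Euclidean form, whose local smoothness is the object of study (the 1D example is illustrative and not from the paper):

import numpy as np

def douglas_rachford(prox_f, prox_g, s0, n_iters=500):
    # Standard Douglas-Rachford splitting for min f(x) + g(x), where prox_f
    # and prox_g are the proximal maps of gamma*f and gamma*g for some fixed
    # gamma > 0 baked into the callables.
    s = s0
    for _ in range(n_iters):
        y = prox_f(s)             # proximal half-step on f
        z = prox_g(2.0 * y - s)   # reflected proximal step on g
        s = s + (z - y)           # fixed-point update of the governing sequence
    return prox_f(s)              # shadow point approximates a minimizer

# Illustrative 1D use with gamma = 1: f(x) = |x| (prox = soft-thresholding),
# g(x) = (x - 1)^2 / 2 (prox_g(s) = (s + 1) / 2); the minimizer is x* = 0.
soft = lambda s: np.sign(s) * np.maximum(np.abs(s) - 1.0, 0.0)
quad = lambda s: (s + 1.0) / 2.0
x_star = douglas_rachford(soft, quad, s0=5.0)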

Weak convexity and approximate subdifferentials

We explore and construct an enlarged subdifferential for weakly convex functions. The resulting object turns out to be continuous with respect to both the function argument and the enlargement parameter. We carefully analyze connections with other constructs in the literature and particularize well-known variational principles to the weakly convex setting. By resorting to the new …
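For orientation (standard definitions the paper builds on, not the paper's own construction): $f$ is $\rho$-weakly convex when $x \mapsto f(x) + \tfrac{\rho}{2}\|x\|^{2}$ is convex, and the classical $\varepsilon$-subdifferential of a convex function is the enlargement
\[
\partial_{\varepsilon} f(x) \;=\; \bigl\{ v : f(y) \ge f(x) + \langle v,\, y - x \rangle - \varepsilon \ \text{for all } y \bigr\};
\]
the enlarged subdifferential constructed here is its own object, so these formulas serve only as reference points.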

Convergence of the Chambolle–Pock Algorithm in the Absence of Monotonicity

The Chambolle-Pock algorithm (CPA), also known as the primal-dual hybrid gradient method (PDHG), has surged in popularity over the last decade due to its success in solving convex/monotone structured problems. This work provides convergence results for problems with varying degrees of (non)monotonicity, quantified through a so-called oblique weak Minty condition on the associated primal-dual operator. …
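In one standard form, for $\min_{x} f(x) + g(Kx)$, CPA/PDHG iterates
\[
x^{k+1} = \operatorname{prox}_{\tau f}\bigl(x^{k} - \tau K^{*} y^{k}\bigr), \qquad
y^{k+1} = \operatorname{prox}_{\sigma g^{*}}\bigl(y^{k} + \sigma K (2x^{k+1} - x^{k})\bigr),
\]
with step sizes $\tau, \sigma > 0$; the (non)monotonicity conditions of the paper are imposed on the primal-dual operator underlying this fixed-point iteration.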

Range of the displacement operator of PDHG with applications to quadratic and conic programming

Primal-dual hybrid gradient (PDHG) is a first-order method for saddle-point problems and convex programming introduced by Chambolle and Pock. Recently, Applegate et al. analyzed the behavior of PDHG when applied to an infeasible or unbounded instance of linear programming, and in particular, showed that PDHG is able to diagnose these conditions. Their analysis hinges on …
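As background (Pazy's lemma, stated as context rather than as the paper's contribution): for a nonexpansive map $T$ on a Hilbert space, the infimal displacement vector is the minimal-norm element
\[
v \;=\; \operatorname*{argmin}\,\bigl\{ \|u\| : u \in \overline{\operatorname{ran}}(T - I) \bigr\},
\]
and for averaged maps the differences $z^{k+1} - z^{k} = T z^{k} - z^{k}$ converge to $v$; a nonzero $v$ is precisely the kind of certificate of infeasibility or unboundedness exploited in such PDHG analyses.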