Weakly convex Douglas-Rachford splitting avoids strict saddle points

We prove that the Douglas-Rachford splitting method converges, almost surely, to local minimizers of semialgebraic weakly convex optimization problems under the strict saddle property. The approach consists of two steps: first, we prove a manifold identification result and local smoothness of the involved iteration operator. Then, we show that strict …
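The truncated abstract does not reproduce the iteration itself. For orientation, here is a minimal sketch of the standard Douglas-Rachford iteration for minimizing f + g via proximal maps, run on a convex toy problem (the paper's setting is weakly convex; the function names and the LASSO instance below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def douglas_rachford(prox_f, prox_g, z0, gamma=1.0, iters=500):
    # Standard Douglas-Rachford iteration for min_x f(x) + g(x):
    #   x_k     = prox_{gamma f}(z_k)
    #   y_k     = prox_{gamma g}(2 x_k - z_k)
    #   z_{k+1} = z_k + y_k - x_k
    z = np.array(z0, dtype=float)
    for _ in range(iters):
        x = prox_f(z, gamma)
        y = prox_g(2.0 * x - z, gamma)
        z = z + (y - x)
    return prox_f(z, gamma)

# Toy convex instance for illustration: LASSO,
#   f(x) = 0.5 * ||A x - b||^2,  g(x) = lam * ||x||_1.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((30, 10)), rng.standard_normal(30), 0.5

def prox_f(v, t):
    # prox of t * 0.5 * ||A x - b||^2: solve (I + t A^T A) x = v + t A^T b.
    return np.linalg.solve(np.eye(A.shape[1]) + t * A.T @ A, v + t * A.T @ b)

def prox_g(v, t):
    # prox of t * lam * ||x||_1: soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

x_star = douglas_rachford(prox_f, prox_g, np.zeros(10))
```

The update map z ↦ z + prox_{γg}(2 prox_{γf}(z) − z) − prox_{γf}(z) is presumably the iteration operator whose local smoothness, after manifold identification, the abstract refers to.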

Weak convexity and approximate subdifferentials

We construct and study an enlarged subdifferential for weakly convex functions. The resulting object turns out to be continuous with respect to both the function argument and the enlargement parameter. We carefully analyze connections with other constructs in the literature and extend well-known variational principles to the weakly convex setting. By resorting to the new …
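The truncated abstract does not state the construction. For reference, here is the standard definition of ρ-weak convexity together with one natural ε-enlargement in the spirit of the convex ε-subdifferential (this particular enlargement is an illustrative assumption and need not coincide with the paper's object):

```latex
% f : R^n -> R is rho-weakly convex when x -> f(x) + (rho/2)||x||^2 is convex;
% equivalently, every subgradient v in \partial f(x) satisfies
\[
  f(y) \;\ge\; f(x) + \langle v,\, y - x\rangle - \tfrac{\rho}{2}\,\|y - x\|^2
  \qquad \text{for all } y \in \mathbb{R}^n.
\]
% One natural epsilon-enlargement, by analogy with the convex
% epsilon-subdifferential (illustrative; not necessarily the paper's):
\[
  \partial_{\varepsilon} f(x) \;=\;
  \Bigl\{ v \in \mathbb{R}^n \;:\;
    f(y) \ge f(x) + \langle v,\, y - x\rangle
      - \tfrac{\rho}{2}\,\|y - x\|^2 - \varepsilon
    \ \text{ for all } y \in \mathbb{R}^n \Bigr\}.
\]
```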

A unified analysis of descent sequences in weakly convex optimization, including convergence rates for bundle methods

We present a framework for analyzing convergence and local rates of convergence of a class of descent algorithms, assuming the objective function is weakly convex. The framework is general in the sense that it accommodates explicit iterations (based on the gradient or a subgradient at the current iterate), implicit iterations (using a …
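To make the explicit/implicit distinction concrete, here is a small sketch on a toy weakly convex function (the objective, step sizes, and brute-force prox solver are illustrative assumptions, not the paper's):

```python
import numpy as np

def f(y):
    # Toy weakly convex objective (illustrative, not from the paper):
    # f(x) = |x^2 - 1| is 2-weakly convex, with minimizers x* = +-1.
    return np.abs(y**2 - 1.0)

def subgrad_f(x):
    # One valid subgradient of f at the scalar x.
    return np.sign(x * x - 1.0) * 2.0 * x

def explicit_step(x, t):
    # Explicit iteration: a subgradient step x+ = x - t * g, g in ∂f(x).
    return x - t * subgrad_f(x)

def implicit_step(x, t, grid=np.linspace(-3.0, 3.0, 60001)):
    # Implicit iteration (proximal point):
    #   x+ = argmin_y  f(y) + ||y - x||^2 / (2 t),
    # solved by brute-force 1-D search for transparency. The subproblem
    # is strongly convex whenever t < 1/rho = 0.5 here.
    return grid[np.argmin(f(grid) + (grid - x) ** 2 / (2.0 * t))]

x_exp = x_imp = 2.5
for k in range(100):
    x_exp = explicit_step(x_exp, t=0.1 / (k + 1))  # diminishing step sizes
    x_imp = implicit_step(x_imp, t=0.4)            # fixed prox parameter
print(x_exp, x_imp)  # both settle near the minimizer x* = 1
```

The explicit update only evaluates a subgradient at the current iterate, while the implicit update solves a regularized subproblem; weak convexity guarantees that subproblem is well posed for a small enough prox parameter.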