Weakly convex Douglas-Rachford splitting avoids strict saddle points

We prove that the Douglas-Rachford splitting method converges, almost surely, to local minimizers of semialgebraic weakly convex optimization problems, under the strict saddle property assumption. The approach consists of two steps: first, we prove a manifold identification result and local smoothness of the underlying iteration operator. Then we show that strict saddle points are unstable fixed points of this operator, so the dynamics escape critical points with directions of negative curvature. In this manner, Douglas-Rachford splitting joins the family of simple algorithms that avoid saddle points, such as certain first-order and proximal-type methods, including a close relative, the forward-backward splitting method.
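For readers unfamiliar with the algorithm, the following is a minimal sketch of the generic Douglas-Rachford iteration applied to a composite objective f(x) + g(x), assuming both proximal maps are available in closed form. The particular choice of f (a least-squares data-fit term), g (an l1 penalty), the step size gamma, and the stopping rule are illustrative stand-ins, not taken from the paper; the paper's results concern weakly convex, semialgebraic problems.

```python
# Illustrative sketch of the Douglas-Rachford splitting iteration for
# minimizing f(x) + g(x); the specific f, g, gamma, and iteration count
# are assumptions for demonstration, not the paper's setting.
import numpy as np

def prox_f(z, gamma, A, b):
    # Proximal map of f(x) = 0.5 * ||A x - b||^2.
    n = A.shape[1]
    return np.linalg.solve(gamma * A.T @ A + np.eye(n), gamma * A.T @ b + z)

def prox_g(z, gamma, lam):
    # Proximal map of g(x) = lam * ||x||_1 (soft thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - gamma * lam, 0.0)

def douglas_rachford(z0, gamma, lam, A, b, iters=500):
    z = z0.copy()
    for _ in range(iters):
        x = prox_f(z, gamma, A, b)           # first proximal step
        y = prox_g(2 * x - z, gamma, lam)    # reflected proximal step
        z = z + y - x                        # fixed-point (governing) update
    return prox_f(z, gamma, A, b)            # the "shadow" iterate x_k

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 50))
    x_true = np.zeros(50)
    x_true[:3] = [1.0, -2.0, 0.5]
    b = A @ x_true
    x = douglas_rachford(np.zeros(50), gamma=1.0, lam=0.1, A=A, b=b)
    print("recovered support:", np.nonzero(np.abs(x) > 1e-3)[0])
```

The saddle-avoidance analysis in the paper studies fixed points of the governing update z -> z + prox_{gamma g}(2 prox_{gamma f}(z) - z) - prox_{gamma f}(z), which is exactly the map iterated in the loop above.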
