Weakly convex Douglas-Rachford splitting avoids strict saddle points

We prove that the Douglas-Rachford splitting method converges, almost surely, to local minimizers of semialgebraic weakly convex optimization problems under the strict saddle property. The approach consists of two steps: first, we prove a manifold identification result and local smoothness of the underlying iteration operator; then, we show that strict …
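To make the iteration concrete, here is a minimal sketch of the classical Douglas-Rachford iteration for minimizing f + g; the proximal maps, the parameter gamma, and the toy problem are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def douglas_rachford(prox_f, prox_g, z0, gamma=1.0, iters=500):
    """Classical Douglas-Rachford iteration for minimizing f(x) + g(x).

    prox_f, prox_g: callables (v, gamma) mapping v to
    argmin_x f(x) + ||x - v||^2 / (2 * gamma); for weakly convex f, g
    these are well defined once gamma is small enough.
    """
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        x = prox_f(z, gamma)            # first proximal step
        y = prox_g(2.0 * x - z, gamma)  # reflected proximal step
        z = z + y - x                   # update of the governing sequence
    return x

# Toy example: minimize |x| + (x - 1)^2.
prox_abs = lambda v, g: np.sign(v) * np.maximum(np.abs(v) - g, 0.0)
prox_quad = lambda v, g: (v + 2.0 * g) / (1.0 + 2.0 * g)
print(douglas_rachford(prox_abs, prox_quad, z0=5.0))  # converges to 0.5
```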

A nearly linearly convergent first-order method for nonsmooth functions with quadratic growth

Classical results show that gradient descent converges linearly to minimizers of smooth strongly convex functions. A natural question is whether there exists a locally nearly linearly convergent method for nonsmooth functions with quadratic growth. This work designs such a method for a wide class of nonsmooth and nonconvex locally Lipschitz functions, including max-of-smooth, Shapiro’s decomposable …
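The classical benchmark in the first sentence is easy to see numerically; the sketch below (illustrative only, not the paper's method) runs gradient descent on a strongly convex quadratic and prints the geometric decay of the iterate norm.

```python
import numpy as np

# Gradient descent on f(x) = 0.5 * x^T A x with A positive definite:
# the error contracts by a fixed factor per step (linear convergence).
A = np.diag([1.0, 10.0])        # mu = 1, L = 10, condition number 10
x = np.array([1.0, 1.0])
step = 2.0 / (1.0 + 10.0)       # classical step choice 2 / (mu + L)
for k in range(20):
    x = x - step * (A @ x)      # gradient of f is A x
    print(k, np.linalg.norm(x)) # decays like ((L - mu)/(L + mu))^k = (9/11)^k
```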

Stochastic subgradient method converges on tame functions

This work considers the question: what convergence guarantees does the stochastic subgradient method have in the absence of smoothness and convexity? We prove that the stochastic subgradient method, on any semialgebraic locally Lipschitz function, produces limit points that are all first-order stationary. More generally, our result applies to any function with a Whitney stratifiable graph. …
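As one concrete instance of this setting, here is a hypothetical sketch of the stochastic subgradient method on the nonsmooth, semialgebraic function f(x) = |x|; the additive oracle noise and the step-size schedule are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def subgrad_abs(x):
    # np.sign returns a valid subgradient of |x| (it picks 0 at x = 0).
    return np.sign(x)

x = 5.0
for k in range(1, 10_001):
    g = subgrad_abs(x) + rng.normal(scale=0.5)  # noisy subgradient oracle
    x -= g / np.sqrt(k)                         # diminishing steps ~ 1/sqrt(k)
print(x)  # iterates cluster near the first-order stationary point x = 0
```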

Continuity of set-valued maps revisited in the light of tame geometry

We revisit the continuity of set-valued maps: after recalling some basic concepts of variational analysis and briefly describing the state of the art, we obtain, as a by-product, two Sard-type results concerning local minima of scalar- and vector-valued functions. Our main result, however, belongs to the framework of tame geometry, stating that a closed-valued …
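For reference, the standard continuity notions from variational analysis that the abstract alludes to can be stated as follows; these are the usual textbook definitions (in the style of Rockafellar-Wets), not notation taken from the paper. For a set-valued map $S : \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ and the Painlevé-Kuratowski limits:

```latex
\[
S \text{ is outer semicontinuous at } \bar{x}
  \iff \limsup_{x \to \bar{x}} S(x) \subseteq S(\bar{x}),
\qquad
S \text{ is inner semicontinuous at } \bar{x}
  \iff S(\bar{x}) \subseteq \liminf_{x \to \bar{x}} S(x),
\]
```

and $S$ is continuous at $\bar{x}$ when both inclusions hold.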