The cost of nonconvexity in deterministic nonsmooth optimization

We study the impact of nonconvexity on the complexity of nonsmooth optimization, emphasizing objectives such as piecewise linear functions, which may not be weakly convex. We focus on a dimension-independent analysis, slightly modifying a black-box algorithm of Zhang et al. that approximates an $\epsilon$-stationary point of any directionally differentiable Lipschitz objective using $O(\epsilon^{-4})$ calls to a specialized subgradient oracle and a randomized line search. Our simple deterministic black-box version achieves $O(\epsilon^{-5})$ for any difference-of-convex objective, and $O(\epsilon^{-4})$ in the weakly convex case. Our complexity bound depends on a natural nonconvexity modulus that is related, intriguingly, to the negative part of directional second derivatives of the objective, understood in the distributional sense.
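
For intuition only, here is a minimal sketch of the kind of black-box scheme described above: a Goldstein-style subgradient method that either takes a descent step of length $\delta$ or queries the subgradient oracle at a randomly sampled point on the trial segment to shrink its current subgradient estimate. It follows the randomized pattern attributed to Zhang et al., not the paper's deterministic modification, and every function name, constant, and stopping rule in it is an assumption made for illustration.

```python
import numpy as np

def goldstein_sketch(f, subgrad, x0, eps=1e-2, delta=1e-2,
                     max_outer=500, max_inner=200, seed=0):
    # Illustrative sketch only (all names and constants are assumptions):
    # a Goldstein-style descent with a randomized line search, loosely in
    # the spirit of the Zhang et al. scheme; not the paper's deterministic
    # method.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        g = subgrad(x)  # running estimate of a small-norm Goldstein subgradient
        for _ in range(max_inner):
            gnorm = np.linalg.norm(g)
            if gnorm <= eps:
                return x  # approximately (delta, eps)-stationary
            y = x - (delta / gnorm) * g  # trial step of length delta
            if f(y) <= f(x) - 0.5 * eps * delta:
                x = y  # sufficient decrease: accept the step
                break
            # No decrease: sample a point on the segment [x, y], query the
            # oracle there, then shrink the estimate by taking the
            # norm-minimizing convex combination of the two subgradients.
            h = subgrad(x + rng.uniform() * (y - x))
            lam = np.clip(h @ (h - g) / max((g - h) @ (g - h), 1e-16), 0.0, 1.0)
            g = lam * g + (1.0 - lam) * h
    return x

# Toy usage on a convex piecewise linear objective f(x) = max_i <a_i, x> + b_i.
A = np.array([[1.0, 2.0], [-1.0, 0.5], [0.0, -1.0]])
b = np.array([0.0, 1.0, 0.5])
f = lambda x: float(np.max(A @ x + b))
subgrad = lambda x: A[int(np.argmax(A @ x + b))]  # gradient of an active piece
print(goldstein_sketch(f, subgrad, x0=np.array([3.0, -2.0])))
```

The aggregation step uses the closed-form minimizer of $\|\lambda g + (1-\lambda)h\|$ over $\lambda \in [0,1]$, namely $\lambda^* = \langle h, h-g\rangle / \|g-h\|^2$ clipped to $[0,1]$, a standard device for combining two subgradients into one of smaller norm.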
