Error Bound and Isocost Imply Linear Convergence of DCA-based Algorithms to D-stationarity

We consider a class of structured nonsmooth difference-of-convex minimization problems, in which the objective is written as the difference of two possibly nonsmooth convex functions, the second of which is the pointwise maximum of finitely many smooth convex functions. We propose two extrapolated proximal DCA-based algorithms, designed for potential acceleration, that converge to a weak/standard d-stationary point of this structured nonsmooth problem, and we prove linear convergence of both algorithms under a {\it piecewise} error bound and a {\it piecewise} isocost condition. As a byproduct, we refine the linear convergence analysis of sDCA and $\varepsilon$-DCA in the recent work of Dong and Tao (J. Optim. Theory Appl. 189, 190-220, 2021) by removing the assumption of local linear regularity of the intersection of certain stationary sets and dominance regions. We also discuss sufficient conditions that guarantee these assumptions and show that several sparse learning models satisfy all of them. Finally, we conduct elementary numerical simulations on sparse recovery to verify the theoretical results empirically.
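For orientation, the following is a minimal sketch of a proximal DCA iteration with FISTA-style extrapolation applied to a truncated-$\ell_1$ sparse recovery model, whose concave part (the largest-$K$ norm) is a maximum of finitely many smooth convex functions. It is not the paper's exact algorithms; the model, the extrapolation rule, and all parameter values are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact algorithms): an extrapolated proximal
# DCA iteration for the truncated-l1 sparse recovery model
#     min_x  (1/2)||Ax - b||^2 + lam*||x||_1 - lam*|||x|||_K,
# where |||x|||_K (the largest-K norm) is a maximum of finitely many linear,
# hence smooth, convex functions. The FISTA-style extrapolation and all
# parameter values below are illustrative assumptions.
import numpy as np


def soft_threshold(z, tau):
    """Proximal operator of tau*||.||_1 (componentwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)


def largest_k_subgrad(x, K):
    """A subgradient of the largest-K norm: sign(x_i) on the K entries of
    largest magnitude, zero elsewhere (ties broken arbitrarily)."""
    xi = np.zeros_like(x)
    idx = np.argsort(-np.abs(x))[:K]
    xi[idx] = np.sign(x[idx])
    return xi


def extrapolated_prox_dca(A, b, lam, K, max_iter=500, tol=1e-8):
    """Sketch of a proximal DCA step with extrapolation:
       y^k     = x^k + beta_k (x^k - x^{k-1})
       xi^k   in  d(lam*|||.|||_K)(x^k)
       x^{k+1} = prox_{(lam/L)||.||_1}( y^k - (A^T(A y^k - b) - xi^k)/L ).
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x_prev = x = np.zeros(A.shape[1])
    theta_prev = theta = 1.0               # FISTA-style extrapolation parameters
    for _ in range(max_iter):
        beta = (theta_prev - 1.0) / theta
        y = x + beta * (x - x_prev)
        xi = lam * largest_k_subgrad(x, K)  # subgradient of the concave part
        grad = A.T @ (A @ y - b)            # gradient of the smooth part at y
        x_new = soft_threshold(y - (grad - xi) / L, lam / L)
        x_prev, x = x, x_new
        if np.linalg.norm(x - x_prev) <= tol * max(1.0, np.linalg.norm(x_prev)):
            break
        theta_prev, theta = theta, (1.0 + np.sqrt(1.0 + 4.0 * theta ** 2)) / 2.0
    return x


if __name__ == "__main__":
    # Tiny synthetic sparse recovery instance, analogous in spirit to the
    # paper's simulations (dimensions and sparsity level are made up here).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
    b = A @ x_true
    x_hat = extrapolated_prox_dca(A, b, lam=0.05, K=5)
    print("recovery error:", np.linalg.norm(x_hat - x_true))
```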
