On the Linear Convergence to Weak/Standard D-stationary Points of DCA-based Algorithms for Structured Nonsmooth DC Programming

We consider a class of structured nonsmooth difference-of-convex (DC) minimization problems. We allow nonsmoothness in both the convex and concave parts of the objective function, with a finite max structure in the concave part. Our focus is on algorithms that compute a (weak or standard) d(irectional)-stationary point, as advocated in a recent work of Pang et al. in 2017. Our linear convergence results are based on direct generalizations of the error-bound and isocost-surface-separation assumptions proposed in the seminal work of Luo et al. in 1993, together with one additional assumption of local linear regularity of the intersection of certain stationary sets and dominance regions. An interesting by-product is a sharper characterization of the limit set of the basic algorithm proposed by Pang et al., which lies between d-stationarity and global optimality. We also discuss sufficient conditions under which these assumptions hold. Finally, we provide several realistic and nontrivial statistical learning models in which all assumptions hold.
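To make the algorithmic setting concrete, the following is a minimal Python sketch of an enhanced-DCA-style iteration for minimizing f(x) = g(x) - max_i h_i(x), where g is convex (smooth in this sketch) and each h_i is convex and differentiable. It is not the authors' implementation: the epsilon-active-set rule, the acceptance test, the toy problem instance, and the use of scipy's BFGS for the convex subproblems are illustrative assumptions made here for readability.

```python
# Sketch of a DCA-type iteration with a finite max structure in the concave part.
# Assumptions (not from the paper): g is smooth convex, each h_i is smooth convex,
# and each convex subproblem is solved with scipy's BFGS.
import numpy as np
from scipy.optimize import minimize

def enhanced_dca(g, h_list, grad_h_list, x0, eps=1e-3, max_iter=100, tol=1e-8):
    """At each iterate, linearize every epsilon-active piece h_i of the
    concave part, solve the resulting convex subproblem, and keep the best
    candidate.  Stops when the iterate no longer moves."""
    x = np.asarray(x0, dtype=float)
    f = lambda z: g(z) - max(h(z) for h in h_list)
    for _ in range(max_iter):
        h_vals = np.array([h(x) for h in h_list])
        active = np.where(h_vals >= h_vals.max() - eps)[0]  # epsilon-active pieces
        best_x, best_f = x, f(x)
        for i in active:
            gi = grad_h_list[i](x)
            # Convex subproblem: minimize g(z) - <grad h_i(x), z>
            sub = minimize(lambda z: g(z) - gi @ z, x, method="BFGS")
            if f(sub.x) < best_f:
                best_x, best_f = sub.x, f(sub.x)
        if np.linalg.norm(best_x - x) <= tol:
            break
        x = best_x
    return x

# Toy instance: g(x) = 0.5*||x - a||^2, h_i(x) = b_i @ x (linear, hence convex).
a = np.array([1.0, -2.0])
bs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
g = lambda z: 0.5 * np.sum((z - a) ** 2)
h_list = [lambda z, b=b: float(b @ z) for b in bs]
grad_h_list = [lambda z, b=b: b for b in bs]
print(enhanced_dca(g, h_list, grad_h_list, np.zeros(2)))
```

Enumerating all epsilon-active pieces of the max, rather than a single subgradient of the concave part, is what distinguishes this style of iteration from the classical DCA and is what allows convergence to d-stationary points rather than only critical points.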
