In this paper we consider the linear convergence of algorithms for minimizing difference-of-convex functions with convex constraints. We allow nonsmoothness in both the convex and concave components of the objective function, with a finite max structure in the concave component. Our focus is on algorithms that compute (weak and standard) d(irectional)-stationary points, as advocated in a recent paper by Pang, Razaviyayn and Alvarado (2016). Our linear convergence results are based on direct generalizations of the error-bound and isocost-surface-separation assumptions proposed in the seminal work of Luo and Tseng (1993), together with one additional assumption of local linear regularity regarding the intersection of certain stationary sets and dominance regions. We also discuss sufficient conditions under which these assumptions hold. We provide a realistic and nontrivial example in which all assumptions hold: sparse estimation with a strongly convex loss and the (nonconvex and nonsmooth) capped-ℓ1 sparse penalty. An interesting by-product of our work is a sharper characterization of the limit set of the basic algorithm proposed by Pang, Razaviyayn and Alvarado (2016) for computing d-stationary points, together with a stationarity concept stronger than d-stationarity. Defined via the notion of approximate subdifferential from variational analysis, this new stationarity concept naturally sits between d-stationarity and global optimality.
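For concreteness, the problem class described above can be sketched in standard DC form as follows; the symbols $g$, $h_i$, $X$, and the capped-ℓ1 decomposition shown are illustrative notation, not necessarily the paper's own:

```latex
% Constrained DC program with a finite-max concave part:
%   X closed convex, g convex (possibly nonsmooth), each h_i convex.
\min_{x \in X} \; f(x) \;=\; g(x) - h(x),
\qquad h(x) \;=\; \max_{1 \le i \le m} h_i(x).

% Capped-ell_1 example (one DC decomposition of the penalty):
%   rho(t) = lambda * min(|t|, theta)
%          = lambda |t| \;-\; lambda \max(|t| - \theta,\, 0),
% a difference of two convex functions, matching the structure above.
\rho(t) \;=\; \lambda \min(|t|, \theta)
\;=\; \underbrace{\lambda |t|}_{\text{convex}}
\;-\; \underbrace{\lambda \max(|t| - \theta,\, 0)}_{\text{convex, finite max}}.
```

The identity $\min(|t|,\theta) = |t| - \max(|t|-\theta, 0)$ can be checked case by case: for $|t| \le \theta$ both sides equal $|t|$, and for $|t| > \theta$ both sides equal $\theta$.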

## Citation

Working paper, Department of Mathematics and Statistics, Washington State University, Pullman, WA, USA, Aug. 2018.

## Article

On the Linear Convergence of Difference-of-convex Algorithms for Nonsmooth DC Programming