A progressive decoupling algorithm for minimizing the difference of convex and weakly convex functions over a linear subspace

Decomposition and splitting techniques for optimization problems commonly depend strongly on convexity. 
Implementable splitting methods for nonconvex and nonsmooth optimization problems are scarce and often lack convergence guarantees. 
Among the few exceptions is the Progressive Decoupling Algorithm (PDA), which converges locally provided that convexity is elicitable. 
In this work, we furnish the PDA with a descent test and extend the method to accommodate a broad class of nonsmooth optimization 
problems with non-elicitable convexity. More precisely, we focus on the problem of minimizing the difference of 
convex and weakly convex functions over a linear subspace. This framework covers, in particular, a family of stochastic 
programs with nonconvex recourse and statistical estimation problems for supervised learning.
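
In symbols (notation chosen here for illustration; the paper's own symbols may differ), the problem class named in the title can be written as

$$ \min_{x \in S} \; f(x) - g(x), $$

where $f$ is convex, $g$ is weakly convex in the usual sense that $g + \tfrac{\rho}{2}\|\cdot\|^2$ is convex for some $\rho \ge 0$, and $S$ is a linear subspace.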

Citation

Journal of Optimization Theory and Applications. DOI: 10.1007/s10957-024-02574-4