Minimizing the difference of convex and weakly convex functions via bundle method

We consider optimization problems whose objective and constraint functions are each the difference of a convex and a weakly convex function. This framework covers a vast family of nonsmooth and nonconvex optimization problems, particularly those involving certain classes of composite and nonconvex value functions. We investigate several stationarity conditions and extend the proximal bundle algorithm of [van Ackooij et al., Comput. Optim. Appl., 78 (2021), pp. 451–490] to compute critical points for problems in this class. Our modifications to that algorithm amount to a different master program and a new rule for updating the proximal parameter that ensures convergence in this more general setting. Thanks to this new rule, no pre-estimation of the underlying weak-convexity moduli is needed, opening the way to optimization problems for which no practical and mathematically sound algorithms exist. Numerical experiments on nonconvex stochastic problems illustrate the practical performance of the method.
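As a rough sketch of the problem class described above (the notation below is illustrative and not taken from the paper; the precise assumptions are those stated therein), one may think of problems of the form
\[
  \min_{x \in X} \; f_1(x) - f_2(x)
  \quad \text{subject to} \quad c_1(x) - c_2(x) \le 0,
\]
where $f_1, c_1$ are convex and $f_2, c_2$ are weakly convex, i.e., $f_2 + \tfrac{\rho}{2}\|\cdot\|^2$ is convex for some modulus $\rho \ge 0$ (and similarly for $c_2$), with no prior estimate of $\rho$ assumed by the method.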

Pacific Journal of Optimization, Vol. 20, 2024. https://doi.org/10.61208/pjo-2024-004