Projected proximal gradient trust-region algorithm for nonsmooth optimization

We consider trust-region methods for minimizing objectives that are the sum of a smooth, nonconvex function and a nonsmooth, convex regularizer. We extend the global convergence theory of such methods with worst-case complexity bounds in the case of unbounded model Hessian growth, and we introduce a new, simple nonsmooth trust-region subproblem solver that combines several iterations of proximal gradient descent with a single projection onto the trust region. This solver meets the sufficient-descent requirements for algorithm convergence and shows promising numerical results.
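To illustrate the flavor of the proposed subproblem solver, here is a minimal sketch (not the authors' implementation): several proximal gradient iterations followed by a single Euclidean projection onto the trust region. All concrete choices below are assumptions for illustration: the regularizer is taken to be an ℓ1 penalty (whose proximal operator is soft-thresholding), the trust region is an ℓ2 ball, and the smooth part is a simple quadratic.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def project_ball(x, center, radius):
    # Euclidean projection onto the trust region {x : ||x - center|| <= radius}.
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + (radius / n) * d

def prox_grad_tr_step(grad_f, prox_h, x0, radius, stepsize, n_iters=10):
    # Run several proximal gradient iterations on the model objective,
    # then project the result back into the trust region once.
    x = x0.copy()
    for _ in range(n_iters):
        x = prox_h(x - stepsize * grad_f(x), stepsize)
    return project_ball(x, x0, radius)

# Hypothetical example: minimize 0.5*||x - b||^2 + lam*||x||_1 near x0 = 0.
b = np.array([3.0, -0.5, 0.0])
lam = 0.2
grad_f = lambda x: x - b                          # gradient of the smooth part
prox_h = lambda x, t: soft_threshold(x, lam * t)  # prox of the regularizer
x0 = np.zeros(3)
s = prox_grad_tr_step(grad_f, prox_h, x0, radius=1.0, stepsize=0.5, n_iters=50)
```

Here the unconstrained proximal gradient iterates converge to the soft-thresholded point, which lies outside the unit trust region, so the final projection rescales the step to the trust-region boundary.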

Citation

Minh N. Dao, Hung M. Phan, and Lindon Roberts. Projected proximal gradient trust-region algorithm for nonsmooth optimization. Technical report (2025).
