A Proximal Quasi-Newton Trust-Region Method for Nonsmooth Regularized Optimization

We develop a trust-region method for minimizing the sum of a smooth term f and a nonsmooth term h, both of which may be nonconvex. Each iteration of our method minimizes a possibly nonconvex model of f + h in a trust region. The model coincides with f + h in value and subdifferential at the center. We establish global convergence to a first-order stationary point when f satisfies a smoothness condition that holds, in particular, when its gradient is Lipschitz continuous, and h is proper and lower semi-continuous. The model of h is required to be proper, lower semi-continuous, and prox-bounded. Under these weak assumptions, we establish a worst-case O(1/ε²) iteration complexity bound that matches the best known bound for standard trust-region methods in smooth optimization. We detail a special instance in which we use a limited-memory quasi-Newton model of f and compute a step with the proximal gradient method, resulting in a practical proximal quasi-Newton method. We establish similar convergence properties and a similar complexity bound for a quadratic regularization variant, and interpret it as a proximal gradient method with adaptive step size for nonconvex problems. We describe our Julia implementations and report numerical results on inverse problems from sparse optimization and signal processing. Our trust-region algorithm exhibits promising performance and compares favorably with line-search proximal quasi-Newton methods based on convex models.
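To make the adaptive-step-size interpretation concrete, below is a minimal Julia sketch of a quadratic-regularization iteration of the kind the abstract describes: the step is a proximal gradient step with step size ν = 1/σ, and σ is adapted from the ratio of actual to predicted decrease, exactly as a trust-region radius would be. This is not the authors' implementation; the function names (`qr_sketch`, `prox`) and parameter values (η1, η2, γ) are illustrative placeholders. The closing lines show a usage example on an ℓ1-regularized least-squares problem, where the prox of λ‖·‖₁ is soft thresholding.

```julia
using LinearAlgebra

# prox(v, ν) should return argmin_u  h(u) + ‖u - v‖² / (2ν),
# i.e., the proximal operator of h with step ν (assumed available in closed form).
function qr_sketch(f, ∇f, h, prox, x;
                   σ = 1.0, σmin = 1e-8, η1 = 1e-4, η2 = 0.9, γ = 3.0,
                   maxiter = 500, atol = 1e-6)
    fx, hx = f(x), h(x)
    for _ in 1:maxiter
        g = ∇f(x)
        ν = 1 / σ                          # step size = inverse regularization parameter
        s = prox(x - ν * g, ν) - x         # proximal gradient step
        norm(s) ≤ atol && return x         # crude stationarity test
        # decrease of the model φ(s) = gᵀs + (σ/2)‖s‖² + h(x + s) from its value h(x) at s = 0
        Δmod = hx - (dot(g, s) + σ / 2 * dot(s, s) + h(x + s))
        Δmod ≤ 0 && return x
        xt = x + s
        ft, ht = f(xt), h(xt)
        ρ = (fx + hx - ft - ht) / Δmod     # actual vs. predicted decrease
        if ρ ≥ η1                          # successful step: accept
            x, fx, hx = xt, ft, ht
        end
        ρ ≥ η2 && (σ = max(σ / γ, σmin))   # very successful: allow larger steps
        ρ < η1 && (σ *= γ)                 # unsuccessful: shrink the step
    end
    return x
end

# Example: sparse least squares, f(x) = ½‖Ax − b‖², h(x) = λ‖x‖₁.
A, b, λ = randn(20, 50), randn(20), 0.1
f(x)  = 0.5 * norm(A * x - b)^2
∇f(x) = A' * (A * x - b)
h(x)  = λ * norm(x, 1)
soft(v, ν) = sign.(v) .* max.(abs.(v) .- λ * ν, 0)   # prox of λ‖·‖₁
x̂ = qr_sketch(f, ∇f, h, soft, zeros(50))
```

Note that the nonsmooth term enters only through its proximal operator, which is why the method accommodates nonconvex h such as the ℓ0 pseudo-norm as long as h is proper, lower semi-continuous, and prox-bounded.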

Citation

@TechReport{aravkin-baraldi-orban-2021,
  author      = {A. Aravkin and R. Baraldi and D. Orban},
  title       = {A Proximal Quasi-{N}ewton Trust-Region Method for Nonsmooth Regularized Optimization},
  institution = {GERAD},
  year        = {2021},
  type        = {Cahier du GERAD},
  number      = {G-2021-12},
  address     = {Montr\'eal, QC, Canada},
  grant       = {NSERC},
  preprint    = {https://www.gerad.ca/en/papers/G-2021-12/view},
  doi         = {10.13140/RG.2.2.18509.15845},
  month       = {March},
}
