Local Convergence Analysis of an Inexact Trust-Region Method for Nonsmooth Optimization

In [R. J. Baraldi and D. P. Kouri, Mathematical Programming, (2022), pp. 1–40], we introduced an inexact trust-region algorithm for minimizing the sum of a smooth nonconvex function and a nonsmooth convex function in Hilbert space—a class of problems that is ubiquitous in data science, learning, optimal control, and inverse problems. This algorithm has demonstrated excellent performance and scalability with problem size. In this paper, we enrich the convergence analysis for this algorithm, proving strong convergence of the iterates with guaranteed rates. In particular, we demonstrate that the trust-region algorithm recovers superlinear, even quadratic, convergence rates when using a second-order Taylor approximation of the smooth objective function term.
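To make the problem class concrete, the following is a minimal Python sketch of a proximal trust-region loop for a composite objective f(x) + λ‖x‖₁, in which a second-order model is minimized inexactly by a few proximal-gradient steps and the trial step is truncated to the trust region. This is an illustrative simplification under assumptions made here, not the algorithm analyzed in the paper: the function names (`prox_trust_region`, `soft_threshold`), the ℓ₁ choice of nonsmooth term, the truncation rule, and all parameter values are hypothetical.

```python
import numpy as np


def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)


def prox_trust_region(f, grad, hess, x0, lam=1e-2, delta=1.0,
                      eta=0.1, max_iter=100, sub_iters=25, tol=1e-8):
    """Simplified proximal trust-region loop for min_x f(x) + lam*||x||_1.

    Illustrative sketch only: the model  m(s) = g's + 0.5 s'Hs + lam*||x+s||_1
    is minimized inexactly by a few proximal-gradient steps, and the trial
    step is truncated to the trust region ||s|| <= delta.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        L = max(np.linalg.norm(H, 2), 1e-12)    # inner step size 1 / L
        s = np.zeros_like(x)
        for _ in range(sub_iters):
            # proximal-gradient step on the model, in the shifted variable x + s
            s = soft_threshold(x + s - (g + H @ s) / L, lam / L) - x
            ns = np.linalg.norm(s)
            if ns > delta:                      # truncate to the trust region
                s *= delta / ns
        # actual vs. predicted reduction of the composite objective
        ared = (f(x) + lam * np.abs(x).sum()) - (f(x + s) + lam * np.abs(x + s).sum())
        pred = -(g @ s + 0.5 * s @ (H @ s)
                 + lam * (np.abs(x + s).sum() - np.abs(x).sum()))
        rho = ared / max(pred, 1e-16)
        if rho >= eta:                          # accept the step
            x = x + s
            if rho > 0.75:
                delta *= 2.0                    # expand on very successful steps
            if np.linalg.norm(s) < tol:         # accepted step is tiny: stop
                break
        else:
            delta *= 0.25                       # reject and shrink the radius
    return x


# Hypothetical usage: sparse least squares with f(x) = 0.5 * ||Ax - b||^2.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
x_hat = prox_trust_region(lambda x: 0.5 * np.sum((A @ x - b) ** 2),
                          lambda x: A.T @ (A @ x - b),
                          lambda x: A.T @ A,
                          x0=np.zeros(20), lam=0.5)
```

In this sketch the second-order model (exact Hessian) plays the role of the second-order Taylor approximation mentioned above; the paper's analysis concerns the inexact trust-region method of Baraldi and Kouri, whose subproblem solver and acceptance tests are more sophisticated than this loop.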