Adaptive Newton-CG methods with global and local analysis for unconstrained optimization with Hölder continuous Hessian

In this paper, we study Newton-conjugate gradient (Newton-CG) methods for minimizing a nonconvex function $f$ whose Hessian is $(H_f,\nu)$-Hölder continuous with modulus $H_f>0$ and exponent $\nu\in(0,1]$. Recently proposed Newton-CG methods for this problem \cite{he2025newton} adopt (i) non-adaptive regularization and (ii) a nested line-search procedure, where (i) often leads to inefficient early progress and the loss … Read more
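The core iteration behind such methods can be sketched as a regularized Newton-CG loop: solve the regularized Newton system $(\nabla^2 f(x) + \sigma I)\,d = -\nabla f(x)$ inexactly by conjugate gradients and step along $d$. The sketch below uses a fixed regularization parameter `sigma` and a plain unit step, whereas the paper's point is precisely to adapt the regularization to the unknown Hölder modulus and exponent; the test function (`cosh`) and all parameter values are illustrative assumptions, not from the paper.

```python
import numpy as np

def cg(A, b, tol=1e-10, max_iter=100):
    """Conjugate gradient for A x = b with A symmetric positive definite."""
    x = np.zeros_like(b)
    r = b.copy()           # residual b - A x
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def newton_cg(grad, hess, x0, sigma=1e-3, tol=1e-8, max_iter=50):
    """Regularized Newton-CG with a fixed sigma (the paper adapts it)."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        d = cg(H + sigma * np.eye(len(x)), -g)  # inexact Newton step via CG
        x = x + d
    return x

# Illustrative smooth test problem: f(x) = cosh(x_1) + cosh(x_2), minimized at 0.
grad = lambda x: np.sinh(x)
hess = lambda x: np.diag(np.cosh(x))
x_star = newton_cg(grad, hess, np.array([1.0, -1.5]))
```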

Local Convergence Analysis of an Inexact Trust-Region Method for Nonsmooth Optimization

In [R. J. Baraldi and D. P. Kouri, Mathematical Programming, (2022), pp. 1–40], we introduced an inexact trust-region algorithm for minimizing the sum of a smooth nonconvex function and a nonsmooth convex function in Hilbert space—a class of problems that is ubiquitous in data science, learning, optimal control, and inverse problems. This algorithm has demonstrated … Read more
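The problem class in question, a smooth nonconvex term plus a nonsmooth convex term, is typically handled through the proximal operator of the nonsmooth part. The sketch below shows the basic proximal-gradient building block (ISTA) on a least-squares-plus-$\ell_1$ instance; it is not the authors' inexact trust-region algorithm, only the proximal machinery such methods build on, and the data here are synthetic.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient(A, b, lam, step, n_iter=3000):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)                    # gradient of the smooth part
        x = soft_threshold(x - step * g, step * lam)  # prox step on the l1 part
    return x

# Synthetic sparse-recovery instance (illustrative, not from the paper).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz constant of the gradient
x_hat = prox_gradient(A, b, lam=0.01, step=step)
```

The inexact trust-region algorithm of the paper replaces the exact proximal step with controllably inexact model minimizations, which is what makes it applicable when the prox has no closed form.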

Local superlinear convergence of polynomial-time interior-point methods for hyperbolic cone optimization problems

In this paper, we establish the local superlinear convergence property of some polynomial-time interior-point methods for an important family of conic optimization problems. The main structural property used in our analysis is the logarithmic homogeneity of the self-concordant barrier function, which must have {\em negative curvature}. We propose a new path-following predictor-corrector scheme, which works only … Read more
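A path-following predictor-corrector scheme can be illustrated on a toy one-dimensional problem with the standard logarithmic barrier; the sketch below is not the paper's hyperbolic-cone method, only the generic pattern of pushing the path parameter up (predictor) and re-centering with damped Newton steps (corrector), with all constants chosen for illustration.

```python
import math

def path_following(c=1.0, mu=1.25, n_outer=60, n_newton=8):
    """Toy path-following for min c*x over 0 < x < 2 with the barrier
    F(x) = -log(x) - log(2 - x); damped Newton keeps iterates in the domain."""
    x, t = 1.0, 1.0                  # analytic center, initial path parameter
    for _ in range(n_outer):
        t *= mu                      # predictor: increase the path parameter
        for _ in range(n_newton):    # corrector: damped Newton on t*c*x + F(x)
            g = t * c - 1.0 / x + 1.0 / (2.0 - x)
            h = 1.0 / x ** 2 + 1.0 / (2.0 - x) ** 2
            lam = abs(g) / math.sqrt(h)   # Newton decrement
            x -= g / h / (1.0 + lam)      # damped step, stays in (0, 2)
    return x, t

x_final, t_final = path_following()
# The duality gap is bounded by (barrier parameter) / t = 2 / t_final.
```

The damping factor $1/(1+\lambda)$ is the standard safeguard for self-concordant functions: it keeps each corrector iterate strictly inside the domain without a separate line search.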