Revisiting Superlinear Convergence of Proximal Newton-Like Methods to Degenerate Solutions
We describe inexact proximal Newton-like methods for solving degenerate regularized optimization problems and for the broader problem of finding a zero of the sum of a continuous map and a maximal monotone operator, i.e., a generalized equation. Superlinear convergence of both the distance to the solution set and a certain measure of first-order optimality can be achieved under a Hölderian error bound condition, including for problems in which the continuous map is nonmonotone and its Jacobian is singular at the solution and not Lipschitz continuous. Superlinear convergence is attainable even when the Jacobian is merely uniformly continuous, relaxing the standard Lipschitz assumption to its theoretical limit. For convex regularized optimization problems, we introduce a novel globalization strategy that ensures strict objective decrease and avoids the Maratos effect, attaining local \(Q\)-superlinear convergence without prior knowledge of problem parameters. Acceptance of the unit step size in our line search strategy does not rely on the continuity, or even the existence, of the Hessian of the smooth term in the objective, making the framework compatible with other candidates for superlinearly convergent updates.
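For concreteness, the generalized equation referred to above can be written (with \(F\) a continuous map, \(T\) a maximal monotone operator, and the remaining notation ours rather than necessarily the paper's) as
\[
0 \in F(x) + T(x),
\]
and one common form of a proximal Newton-like step linearizes \(F\) at the current iterate \(x^k\) using an approximation \(G_k\) of the Jacobian,
\[
0 \in F(x^k) + G_k\,(x^{k+1} - x^k) + T(x^{k+1}),
\]
with the inclusion solved only approximately in the inexact variants. In the regularized optimization case one typically has \(F = \nabla f\) and \(T = \partial g\) for an objective \(f + g\) with smooth term \(f\) and convex regularizer \(g\). This is a schematic sketch of the standard setting, not a statement of the specific iteration analyzed in the paper.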