Newton’s method for unconstrained optimization, when equipped with proper regularization or special trust-region procedures, finds first-order stationary points with precision $\varepsilon$ using at most $O(\varepsilon^{-3/2})$ function and derivative evaluations. However, the best-known implementations may require several factorizations per iteration or rely on rather expensive matrix decompositions. In this paper, we introduce a method that preserves most features of the regularization approach while using only one cheap factorization, one gradient evaluation, and one Hessian evaluation per iteration. We prove complexity and convergence results, even in the case in which the Hessians of the subproblems are far from the Hessians of the objective function. We also present fairly successful and fully reproducible numerical experiments, and we make the corresponding software available.
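To fix ideas, a generic regularized Newton iteration (an illustrative sketch of the standard approach, not necessarily the specific subproblem solved by the method of this paper) computes the step $d_k$ from the linear system
\[
\bigl(\nabla^2 f(x_k) + \sigma_k I\bigr)\, d_k = -\nabla f(x_k),
\qquad
x_{k+1} = x_k + d_k,
\]
where $\sigma_k \geq 0$ is a regularization parameter chosen so that $\nabla^2 f(x_k) + \sigma_k I$ is positive definite. Each trial value of $\sigma_k$ typically requires a new factorization of $\nabla^2 f(x_k) + \sigma_k I$, which is where the per-iteration cost concentrates and why performing only one cheap factorization per iteration is relevant.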