The Decentralized Trust-Region Method with Second-Order Approximations

This paper presents a novel decentralized trust-region framework that systematically incorporates second-order information to solve general nonlinear optimization problems in multi-agent networks. Our approach constructs local quadratic models that simultaneously capture objective curvature and enforce consensus through penalty terms, while supporting multiple Hessian approximation strategies, including exact Hessians, limited-memory quasi-Newton methods, diagonal preconditioners, and matrix-free finite-difference schemes. Under standard smoothness and strong convexity assumptions, we prove global linear convergence to the unique optimizer and establish local quadratic convergence when the Hessian surrogates accurately approximate the true curvature. Our theoretical analysis explicitly quantifies how network topology, penalty parameters, and approximation quality jointly influence convergence rates and trust-region acceptance criteria. Extensive experiments on benchmark nonlinear optimization problems demonstrate that our method significantly improves communication efficiency, reducing both communication rounds and computational cost compared to state-of-the-art first-order baselines, validating both the theoretical contributions and practical advantages of the proposed approach.
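To make the ingredients named in the abstract concrete, the following is a minimal sketch, not the paper's actual algorithm: each agent minimizes a local quadratic model (gradient plus a Hessian surrogate) augmented with a quadratic consensus penalty toward its neighbors' iterates, takes a Cauchy-point step inside its trust region, and accepts or rejects the step with the standard actual-versus-predicted reduction ratio. All names, the toy objectives f_i(x) = ½‖x − c_i‖², the fully connected topology, and the parameter values are illustrative assumptions.

```python
import numpy as np

def cauchy_step(g, H, delta):
    # Steepest-descent (Cauchy) step for min g.d + 0.5 d.H.d, ||d|| <= delta.
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return np.zeros_like(g)
    gHg = g @ (H @ g)
    alpha = min(gnorm**2 / gHg, delta / gnorm) if gHg > 0 else delta / gnorm
    return -alpha * g

def decentralized_tr(c, rho=10.0, delta0=1.0, iters=300, eta=0.1):
    """Toy decentralized trust-region loop on f_i(x) = 0.5||x - c_i||^2
    over a fully connected network (hypothetical setup, not the paper's)."""
    n, d = c.shape
    x = np.zeros((n, d))
    delta = np.full(n, delta0)
    for _ in range(iters):
        x_old = x.copy()  # synchronous neighbor exchange
        for i in range(n):
            nbrs = [j for j in range(n) if j != i]
            # Gradient and (here exact) Hessian of the penalized local model.
            g = (x_old[i] - c[i]) + rho * sum(x_old[i] - x_old[j] for j in nbrs)
            H = (1.0 + rho * len(nbrs)) * np.eye(d)
            step = cauchy_step(g, H, delta[i])
            pred = -(g @ step + 0.5 * step @ (H @ step))   # predicted decrease
            f = lambda z: 0.5 * np.sum((z - c[i])**2) + \
                0.5 * rho * sum(np.sum((z - x_old[j])**2) for j in nbrs)
            ared = f(x_old[i]) - f(x_old[i] + step)        # actual decrease
            ratio = ared / pred if pred > 0 else 0.0
            if ratio >= eta:                               # accept, expand
                x[i] = x_old[i] + step
                delta[i] = min(2.0 * delta[i], 10.0)
            else:                                          # reject, shrink
                delta[i] *= 0.5
    return x

# Two agents pulled toward c_1 = 0 and c_2 = 1; the penalty drives near-consensus.
c = np.array([[0.0], [1.0]])
x = decentralized_tr(c)
```

With these toy quadratics the models are exact, so every step is accepted; the penalty parameter rho controls the residual disagreement between agents (here |x_1 − x_2| = 1/(1 + 2·rho) at the fixed point), illustrating the trade-off between consensus accuracy and conditioning that the paper's analysis quantifies.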
