Exact Decentralized Optimization via Explicit $\ell_1$ Consensus Penalties

Consensus optimization enables autonomous agents to solve joint tasks through peer-to-peer exchanges alone. Classical decentralized gradient descent is appealing for its minimal per-agent state, but with a fixed stepsize it fails to reach exact consensus unless additional trackers or dual variables are introduced. We revisit penalty methods and introduce a decentralized two-layer framework that couples an outer penalty-continuation … Read more
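
To make the explicit penalty idea concrete, here is a minimal sketch, not the paper's two-layer method: a plain decentralized subgradient iteration on $\sum_i f_i(x_i) + \rho \sum_{(i,j)\in E} \|x_i - x_j\|_1$. The ring topology, quadratic local objectives, penalty weight `rho`, and diminishing stepsize are all illustrative assumptions; the outer continuation layer described in the abstract is not shown.

```python
# Sketch only: decentralized subgradient descent on an explicit l1 consensus penalty.
# Problem: minimize sum_i 0.5*(x_i - b_i)^2 + rho * sum_{(i,j) in E} |x_i - x_j|.
import numpy as np

n = 5
rng = np.random.default_rng(0)
b = rng.normal(size=n)                        # local targets; consensus optimum is mean(b)
edges = [(i, (i + 1) % n) for i in range(n)]  # assumed ring topology

rho = 5.0        # penalty weight; a sufficiently large rho makes the l1 penalty exact
x = np.zeros(n)  # one scalar decision variable per agent

for k in range(2000):
    step = 1.0 / (k + 10)        # diminishing stepsize for the subgradient method
    g = x - b                    # gradients of the local quadratics
    for i, j in edges:
        s = np.sign(x[i] - x[j])  # subgradient of |x_i - x_j|
        g[i] += rho * s
        g[j] -= rho * s           # each agent touches only its own and neighbors' states
    x -= step * g

print("agent states:", np.round(x, 4))
print("consensus optimum (mean of b):", round(b.mean(), 4))
```

Because the $\ell_1$ penalty is nonsmooth, a large enough fixed $\rho$ already places the penalized minimizer at exact consensus, which is the property the explicit penalty trades on; the slow subgradient loop above is only for illustration.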

The Decentralized Trust-Region Method with Second-Order Approximations

This paper presents a decentralized trust-region framework that systematically incorporates second-order information to solve general nonlinear optimization problems in multi-agent networks. Our approach constructs local quadratic models that simultaneously capture objective curvature and enforce consensus through penalty terms, while supporting multiple Hessian approximation strategies, including exact Hessians, limited-memory quasi-Newton methods, diagonal preconditioners, and matrix-free … Read more
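
As a rough illustration of the matrix-free angle (a sketch under stated assumptions, not the paper's implementation): one agent's trust-region subproblem $\min_p\, g^\top p + \tfrac12 p^\top B p$ subject to $\|p\| \le \Delta$, solved with the standard Steihaug-Toint truncated CG method, where the model Hessian $B$ enters only through a Hessian-vector product. Any of the strategies the abstract lists (exact Hessian, limited-memory quasi-Newton, diagonal preconditioner) could sit behind that product; the consensus-penalty coupling across agents is omitted, and the example matrix, gradient, and radius are made up.

```python
# Sketch only: matrix-free Steihaug-Toint truncated CG for one trust-region subproblem.
import numpy as np

def to_boundary(p, d, delta):
    """Positive tau with ||p + tau*d|| = delta (p is assumed inside the region)."""
    a, b, c = d @ d, 2 * (p @ d), p @ p - delta**2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

def steihaug_cg(grad, hessvec, delta, tol=1e-8, max_iter=50):
    """Approximately minimize g^T p + 0.5 p^T B p subject to ||p|| <= delta."""
    p = np.zeros_like(grad)
    r = grad.copy()           # residual r = g + B p (starts at g since p = 0)
    d = -r                    # first CG direction
    if np.linalg.norm(r) < tol:
        return p
    for _ in range(max_iter):
        Bd = hessvec(d)
        dBd = d @ Bd
        if dBd <= 0:          # negative curvature: follow d to the boundary
            return p + to_boundary(p, d, delta) * d
        alpha = (r @ r) / dBd
        if np.linalg.norm(p + alpha * d) >= delta:   # step leaves the region: clip to boundary
            return p + to_boundary(p, d, delta) * d
        p = p + alpha * d
        r_new = r + alpha * Bd
        if np.linalg.norm(r_new) < tol:
            return p
        d = -r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return p

# Example: explicit Hessian behind the matrix-free interface; an L-BFGS two-loop
# product or a diagonal approximation could be plugged in the same way.
B = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([1.0, -2.0])
step = steihaug_cg(g, lambda v: B @ v, delta=0.5)
print("trust-region step:", step, "norm:", np.linalg.norm(step))
```

In this example the unconstrained Newton step has norm roughly 0.94, so the returned step lands on the trust-region boundary at radius 0.5; the Hessian-vector interface is what keeps the solver agnostic to how the model Hessian is approximated.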