Second-order Guarantees of Distributed Gradient Algorithms

We consider distributed smooth nonconvex unconstrained optimization over networks, modeled as a connected graph. We examine the behavior of distributed gradient-based algorithms near strict saddle points. Specifically, we establish that (i) the renowned Distributed Gradient Descent (DGD) algorithm likely converges to a neighborhood of a Second-order Stationary (SoS) solution; and (ii) the more recent class of distributed algorithms based on gradient tracking, implementable also over digraphs, likely converges to exact SoS solutions, thus avoiding strict saddle points. Furthermore, a convergence rate is provided for the latter class of algorithms.
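To make the contrast between the two algorithm classes concrete, here is a minimal NumPy sketch of both update rules (this is illustrative code, not the authors' implementation). It assumes `W` is a doubly stochastic mixing matrix of the connected graph and `grads` is a list of the agents' local gradient oracles; `alpha`, `iters`, and all function names are hypothetical choices for the example.

```python
import numpy as np

def dgd(grads, W, x0, alpha=0.01, iters=1000):
    """Distributed Gradient Descent (DGD) sketch: each agent mixes its
    neighbors' iterates via W and takes a local gradient step.
    x has shape (n_agents, dim); row i is agent i's iterate."""
    x = x0.copy()
    for _ in range(iters):
        g = np.stack([grads[i](x[i]) for i in range(len(grads))])
        x = W @ x - alpha * g  # consensus step followed by gradient step
    return x

def gradient_tracking(grads, W, x0, alpha=0.01, iters=1000):
    """Gradient-tracking sketch: an auxiliary variable y tracks the
    network-average gradient, so with a fixed step size the scheme can
    reach exact stationary points rather than a neighborhood."""
    n = len(grads)
    x = x0.copy()
    g = np.stack([grads[i](x[i]) for i in range(n)])
    y = g.copy()  # initialize the tracker at the local gradients
    for _ in range(iters):
        x = W @ x - alpha * y
        g_new = np.stack([grads[i](x[i]) for i in range(n)])
        y = W @ y + g_new - g  # dynamic average tracking of the gradient
        g = g_new
    return x
```

For example, with `W = np.full((3, 3), 1/3)` (a complete 3-agent graph) and local gradients of a smooth nonconvex cost, the gradient-tracking iterates drive all rows of `x` toward a common stationary point, whereas DGD with the same fixed step size settles in a neighborhood whose size scales with the step size, matching the distinction drawn in the abstract.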

Citation

Preprint, Purdue University Technical Report, September 23, 2018.
