Gradient Descent Only Converges to Minimizers

Published: 2016/02/17
Authors: Michael I. Jordan, Jason D. Lee, Benjamin Recht, Max Simchowitz
Categories: Unconstrained Optimization
Tags: gradient descent, local minimizer, nonconvex optimization, saddle point problem
Short URL: https://optimization-online.org/?p=13835

Abstract: We show that gradient descent converges to a local minimizer, almost surely with random initialization. This is proved by applying the Stable Manifold Theorem from dynamical systems theory.
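A minimal numerical sketch (not from the paper, all function and parameter choices are illustrative assumptions) of the phenomenon the abstract describes: on the toy strict-saddle function f(x, y) = (x^2 - 1)^2 + y^2, which has local minimizers at (±1, 0) and a saddle point at (0, 0), gradient descent with random initialization converges to a minimizer, while the measure-zero set of initializations on the saddle's stable manifold converges to the saddle.

```python
import numpy as np

def grad_f(p):
    # Gradient of the toy strict-saddle function f(x, y) = (x^2 - 1)^2 + y^2.
    x, y = p
    return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

def gradient_descent(p0, step=0.05, iters=1000):
    # Plain gradient descent with a fixed step size (an illustrative choice).
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        p = p - step * grad_f(p)
    return p

rng = np.random.default_rng(0)

# Random initialization: with probability one the iterates avoid the saddle at
# the origin and converge to one of the local minimizers (+1, 0) or (-1, 0).
print(gradient_descent(rng.uniform(-1.0, 1.0, size=2)))

# Initializing exactly on the saddle's stable manifold (here, the y-axis) is the
# measure-zero exception: gradient descent then converges to the saddle (0, 0).
print(gradient_descent([0.0, 0.7]))
```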