A Novel Stepsize for Gradient Descent Method

In this paper, we propose a novel stepsize for the classical gradient descent scheme to solve unconstrained nonlinear optimization problems. We consider convex, smooth objectives whose gradients need not be globally Lipschitz. Our new method requires only a locally Lipschitz gradient, yet still attains the rate $O(\frac{1}{k})$ for $f(x^k)-f_*$. By …
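The abstract is truncated before the stepsize rule itself, so the following is only a minimal sketch of the setting: plain gradient descent with a standard Armijo backtracking stepsize, which likewise needs only local Lipschitz information about the gradient. The function names and all parameter values are illustrative assumptions, not the paper's proposed rule.

```python
import numpy as np

def gradient_descent_backtracking(f, grad_f, x0, iters=100, t0=1.0, beta=0.5):
    """Gradient descent with a backtracking (Armijo) stepsize.

    Illustrative only: the stepsize is shrunk until a sufficient-decrease
    test holds, so only *local* Lipschitz behavior of grad_f is exploited.
    This is a textbook rule, not the paper's novel stepsize.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad_f(x)
        t = t0
        # Backtrack until the sufficient-decrease condition holds.
        while f(x - t * g) > f(x) - 0.5 * t * np.dot(g, g):
            t *= beta
        x = x - t * g
    return x

# Example on a convex objective whose gradient is not globally Lipschitz:
# f(x) = sum(x^4), with grad f(x) = 4 x^3.
x_min = gradient_descent_backtracking(lambda x: np.sum(x**4),
                                      lambda x: 4 * x**3,
                                      x0=np.array([2.0]))
```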

Computing the Completely Positive Factorization via Alternating Minimization

In this article, we propose a novel alternating minimization scheme for finding completely positive factorizations. In each iteration, our method splits the original factorization problem into two optimization subproblems: the first is an orthogonal Procrustes problem, taken over the orthogonal group, and the second is taken over the set of entrywise positive matrices. …
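Since the two subproblems are named explicitly, here is a minimal sketch of such an alternating scheme (the initialization, the target number of columns r, and the iteration count are assumptions). A completely positive matrix admits $A = BB^\top$ with $B \ge 0$ entrywise; keeping a fixed "root" $C$ with $CC^\top = A$, one alternates an orthogonal Procrustes step, which has the closed-form SVD solution, with a projection onto the nonnegative matrices.

```python
import numpy as np

def cp_factorize(A, r, iters=500, seed=0):
    """Alternating sketch for a completely positive factorization A ~ B B^T, B >= 0.

    Illustrative only: A is assumed positive semidefinite and r >= A.shape[0].
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    # Initial square root C with C C^T = A, padded to r columns (assumes r >= n).
    w, V = np.linalg.eigh(A)
    C = V @ np.diag(np.sqrt(np.clip(w, 0.0, None)))
    C = np.hstack([C, np.zeros((n, r - n))])
    B = np.abs(rng.standard_normal((n, r)))          # nonnegative starting point
    for _ in range(iters):
        # Procrustes step: min_Q ||C Q - B||_F over orthogonal Q,
        # solved in closed form as Q = U V^T from the SVD of C^T B.
        U, _, Vt = np.linalg.svd(C.T @ B)
        Q = U @ Vt
        # Second subproblem: project C Q onto the entrywise nonnegative matrices.
        B = np.clip(C @ Q, 0.0, None)
    return B
```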

Accelerating nuclear-norm regularized low-rank matrix optimization through Burer-Monteiro decomposition

This work proposes a rapid algorithm, BM-Global, for nuclear-norm-regularized convex and low-rank matrix optimization problems. BM-Global efficiently decreases the objective value via low-cost steps leveraging the nonconvex but smooth Burer-Monteiro (BM) decomposition, while effectively escaping the saddle points and spurious local minima ubiquitous in the BM form, to obtain guarantees of fast convergence rates to the …
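The BM decomposition writes $X = WH^\top$ and exploits the variational identity $\|X\|_* = \min_{X = WH^\top} \frac{1}{2}(\|W\|_F^2 + \|H\|_F^2)$, which turns the regularized problem into a smooth nonconvex one. A generic gradient step on that form is sketched below; this is not the BM-Global algorithm itself, and `eta`, `lam`, and `grad_f` are placeholders.

```python
import numpy as np

def bm_step(W, H, grad_f, lam, eta):
    """One gradient step on the Burer-Monteiro form of nuclear-norm
    regularization (a generic sketch, not BM-Global).

    Target: min_{W,H}  f(W H^T) + (lam/2) * (||W||_F^2 + ||H||_F^2),
    which upper-bounds f(X) + lam * ||X||_* at X = W H^T.
    """
    G = grad_f(W @ H.T)                     # gradient of f at X = W H^T
    W_new = W - eta * (G @ H + lam * W)     # chain rule: d/dW f(W H^T) = grad_f(X) H
    H_new = H - eta * (G.T @ W + lam * H)   # d/dH f(W H^T) = grad_f(X)^T W
    return W_new, H_new

# Example: matrix completion, f(X) = 0.5 * ||mask * (X - M_obs)||_F^2,
# so one would pass grad_f = lambda X: mask * (X - M_obs).
```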

Factorization of completely positive matrices using iterative projected gradient steps

We aim to factorize a completely positive matrix by using an optimization approach that consists of minimizing a nonconvex smooth function over a convex and compact set. To solve this problem, we propose a projected gradient algorithm with parameters that account for the effects of relaxation and inertia. Both the projection and gradient …
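The abstract names the two mechanisms explicitly, so a generic sketch of a projected gradient step with inertia and relaxation may help (the parameter rules of the paper are not reproduced; `gamma`, `alpha`, and `rho` are illustrative):

```python
import numpy as np

def inertial_projected_gradient(grad_f, proj_C, x0, gamma=0.01,
                                alpha=0.3, rho=1.0, iters=1000):
    """Projected gradient with inertia and relaxation (generic sketch).

      y^k     = x^k + alpha * (x^k - x^{k-1})        # inertial extrapolation
      z^k     = proj_C(y^k - gamma * grad_f(y^k))    # projected gradient step
      x^{k+1} = (1 - rho) * x^k + rho * z^k          # relaxation
    """
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = x + alpha * (x - x_prev)
        z = proj_C(y - gamma * grad_f(y))
        x_prev, x = x, (1.0 - rho) * x + rho * z
    return x
```

Setting alpha = 0 and rho = 1 recovers the plain projected gradient method, which makes the two extra parameters easy to isolate in experiments.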

Beyond Alternating Updates for Matrix Factorization with Inertial Bregman Proximal Gradient Algorithms

Matrix factorization is a popular non-convex objective for which alternating minimization schemes are most commonly used. These schemes usually suffer from the major drawback that the solution is biased towards one of the optimization variables. Non-alternating schemes are a remedy; however, due to the lack of Lipschitz continuity of the gradient in matrix factorization problems, convergence cannot …
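Since the abstract breaks off before the remedy is spelled out, here is a brief sketch, under stated assumptions, of the kind of inertial Bregman proximal gradient step this line of work builds on: Lipschitz continuity of $\nabla f$ is replaced by smoothness of $f$ relative to a kernel $h$ (meaning $Lh - f$ is convex), and the Euclidean distance in the gradient step is replaced by the Bregman distance of $h$. The symbols $\gamma_k$, $\tau$, $a$, $b$ are illustrative.

```latex
% Bregman distance generated by a kernel h:
\[
  D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle .
\]
% Inertial Bregman proximal gradient step (sketch):
\[
  y^k = x^k + \gamma_k \,(x^k - x^{k-1}), \qquad
  x^{k+1} \in \operatorname*{arg\,min}_{x}
    \Big\{ \langle \nabla f(y^k),\, x \rangle
           + \tfrac{1}{\tau}\, D_h(x, y^k) \Big\}.
\]
% For f(U,V) = (1/2) ||A - U V^T||_F^2, a quartic kernel such as
\[
  h(U, V) = \tfrac{a}{4}\big(\lVert U \rVert_F^2 + \lVert V \rVert_F^2\big)^2
          + \tfrac{b}{2}\big(\lVert U \rVert_F^2 + \lVert V \rVert_F^2\big)
\]
% makes f smooth relative to h, restoring convergence guarantees.
```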

The Sound of APALM Clapping: Faster Nonsmooth Nonconvex Optimization with Stochastic Asynchronous PALM

We introduce the Stochastic Asynchronous Proximal Alternating Linearized Minimization (SAPALM) method, a block coordinate stochastic proximal-gradient method for solving nonconvex, nonsmooth optimization problems. SAPALM is the first asynchronous parallel optimization method that provably converges on a large class of nonconvex, nonsmooth problems. We prove that SAPALM matches the best known rates of convergence — among …
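The abstract is cut off, but the synchronous core that SAPALM randomizes and parallelizes is the PALM scheme of alternating proximal-linearized block steps for objectives of the form $H(x, y) + g(x) + h(y)$. A minimal synchronous sketch follows; the step sizes `cx`, `cy` and the prox operators are assumptions, and the asynchronous, stochastic machinery of SAPALM is deliberately not shown.

```python
import numpy as np

def palm(grad_Hx, grad_Hy, prox_g, prox_h, x0, y0, cx, cy, iters=1000):
    """Synchronous PALM sketch (the deterministic core underlying SAPALM).

    Minimizes H(x, y) + g(x) + h(y) by alternating proximal-linearized steps:
        x^{k+1} = prox_{g/cx}( x^k - (1/cx) * grad_x H(x^k, y^k) )
        y^{k+1} = prox_{h/cy}( y^k - (1/cy) * grad_y H(x^{k+1}, y^k) )
    Each prox operator is called as prox(v, t), returning argmin_u
    { g(u) + (1/(2t)) * ||u - v||^2 }.
    """
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    for _ in range(iters):
        x = prox_g(x - grad_Hx(x, y) / cx, 1.0 / cx)
        y = prox_h(y - grad_Hy(x, y) / cy, 1.0 / cy)
    return x, y
```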