Accelerated Gradient Descent via Long Steps

Recently, Grimmer [1] showed that for smooth convex optimization, periodically taking longer steps improves gradient descent's state-of-the-art \(O(1/T)\) convergence guarantee by constant factors, and conjectured that an accelerated rate strictly faster than \(O(1/T)\) might be possible. Here we prove such a big-O gain, establishing gradient descent's first accelerated convergence rate in this setting. Namely, we …
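The flavor of the long-step idea is easy to sketch numerically. Below is a minimal illustration on a toy quadratic, assuming an invented 8-step periodic stepsize pattern; the function `gradient_descent`, the test problem, and the pattern are ours for illustration only, not the certified schedules analyzed in [1] or in this paper.

```python
import numpy as np

def gradient_descent(x0, grad, stepsizes, T):
    """Plain gradient descent, cycling through a periodic stepsize pattern."""
    x = x0.copy()
    for t in range(T):
        x = x - stepsizes[t % len(stepsizes)] * grad(x)
    return x

# Toy smooth convex objective: f(x) = 0.5 * x^T A x with A positive definite.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M.T @ M + np.eye(20)
L = np.linalg.eigvalsh(A).max()      # smoothness constant of f
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

x0 = rng.standard_normal(20)
T = 1000

# Baseline: the textbook constant stepsize 1/L.
x_const = gradient_descent(x0, grad, np.array([1.0]) / L, T)

# Periodic long steps: mostly conservative steps, with an occasional step
# well beyond the classical 2/L stability threshold.  This pattern is
# illustrative only, not the schedule from [1].
pattern = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 6.0]) / L
x_long = gradient_descent(x0, grad, pattern, T)

print(f"f after {T} constant 1/L steps:  {f(x_const):.3e}")
print(f"f after {T} periodic long steps: {f(x_long):.3e}")
```

On this instance the occasional overlong step noticeably reduces the final objective value, even though each such step momentarily violates the usual descent guarantee; the cited works make this intuition rigorous for worst-case smooth convex problems.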

Hidden convexity, optimization, and algorithms on rotation matrices

This paper studies hidden convexity properties associated with constrained optimization problems over the set of rotation matrices \(\text{SO}(n)\). Such problems are nonconvex due to the constraint \(X \in \text{SO}(n)\). Nonetheless, we show that certain linear images of \(\text{SO}(n)\) are convex, opening up the possibility for convex optimization algorithms with provable guarantees for these problems. Our main technical contributions …
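Both points admit a quick numerical sanity check. The sketch below, a NumPy example of our own (the abstract contains no code), shows that \(\text{SO}(3)\) itself is nonconvex, and then illustrates the simplest convex linear image, a scalar functional \(X \mapsto \operatorname{tr}(AX)\), whose image is an interval because \(\text{SO}(3)\) is compact and connected; the helper `random_rotation` and the matrix `A` are hypothetical, chosen only for this demonstration.

```python
import numpy as np

def random_rotation(n, rng):
    """Sample a random rotation matrix from SO(n) via QR decomposition."""
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    Q = Q @ np.diag(np.sign(np.diag(R)))   # canonicalize the QR factors
    if np.linalg.det(Q) < 0:               # force det = +1, i.e. SO(n) not O(n)
        Q[:, 0] = -Q[:, 0]
    return Q

rng = np.random.default_rng(1)
X, Y = random_rotation(3, rng), random_rotation(3, rng)

# SO(3) is nonconvex: the midpoint of two rotations generally violates
# both orthogonality and the det = 1 constraint.
Z = 0.5 * (X + Y)
print("||Z^T Z - I||_F =", np.linalg.norm(Z.T @ Z - np.eye(3)))
print("det(Z) =", np.linalg.det(Z))

# Yet some linear images of SO(3) are convex.  The simplest case:
# X -> tr(A X) is continuous on the compact connected set SO(3),
# so its image is a closed interval of the real line.
A = rng.standard_normal((3, 3))
vals = [np.trace(A @ random_rotation(3, rng)) for _ in range(10000)]
print("sampled range of tr(A X):", min(vals), max(vals))
```

The scalar case is classical; the paper's contribution concerns richer linear images of \(\text{SO}(n)\) and the algorithmic consequences of their convexity.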