Convex optimization on CAT(0) cubical complexes

We consider geodesically convex optimization problems involving distances to a finite set of points A in a CAT(0) cubical complex. Examples include the minimum enclosing ball problem, the weighted mean and median problems, and the feasibility and projection problems for intersecting balls with centers in A. We propose a decomposition approach relying on standard Euclidean …
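
For concreteness, the three objectives named above admit standard formulations; writing d for the geodesic metric of the complex and w_a > 0 for given weights (notation assumed here, since the excerpt is truncated):

    \min_{x} \max_{a \in A} d(x, a)               (minimum enclosing ball)
    \min_{x} \sum_{a \in A} w_a \, d(x, a)^2      (weighted mean)
    \min_{x} \sum_{a \in A} w_a \, d(x, a)        (weighted median)

Each objective is geodesically convex because d(·, a) is convex along geodesics in any CAT(0) space.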

Horoballs and the subgradient method

To explore convex optimization on Hadamard spaces, we consider an iteration in the style of a subgradient algorithm. Traditionally, such methods assume that the underlying spaces are manifolds and that the objectives are geodesically convex: the methods are described using tangent spaces and exponential maps. By contrast, our iteration applies in a general Hadamard space, …
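
For contrast, the traditional manifold iteration alluded to above takes a subgradient in the tangent space and maps the step back with the exponential map; a standard way to write it (notation assumed here) is

    x_{k+1} = \exp_{x_k}(-\alpha_k v_k), \qquad v_k \in \partial f(x_k) \subseteq T_{x_k} M, \quad \alpha_k > 0.

In a general Hadamard space neither tangent spaces nor exponential maps are available, which is the gap a horoball-based iteration is meant to close.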

Accelerated Gradient Descent via Long Steps

Recently, Grimmer [1] showed that for smooth convex optimization, periodically taking longer steps improves gradient descent's state-of-the-art O(1/T) convergence guarantee by constant factors, and conjectured that an accelerated rate strictly faster than O(1/T) might be possible. Here we prove such a big-O gain, establishing gradient descent's first accelerated convergence rate in this setting. Namely, we …
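
In the notation of the smooth convex setting, with f being L-smooth and x_\star a minimizer, a rate "strictly faster than O(1/T)" means a bound of the form

    f(x_T) - f(x_\star) \le \frac{C \, L \, \|x_0 - x_\star\|^2}{T^{1+c}}

for constants C and c > 0; the specific exponent established is cut off in the excerpt above, so it is left symbolic here.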

Provably Faster Gradient Descent via Long Steps

This work establishes provably faster convergence rates for gradient descent in smooth convex optimization via a computer-assisted analysis technique. Our theory allows nonconstant stepsize policies with frequent long steps that may violate descent, analyzing the overall effect of many iterations at once rather than via the typical one-iteration inductions used in most first-order method analyses. We …
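
As an illustration of a long-step stepsize policy of the kind described (a minimal sketch only; the period and long-step length below are placeholder choices, not the certified pattern from the paper, which the excerpt truncates):

    import numpy as np

    def gradient_descent_long_steps(grad, x0, L, T, period=10, long_h=5.0):
        """Gradient descent with mostly short steps h = 1/L and an occasional
        long step h = long_h/L. Long steps may temporarily violate descent;
        the analysis style described above bounds many iterations at once
        rather than requiring per-iteration progress."""
        x = np.asarray(x0, dtype=float)
        for k in range(T):
            h = long_h if (k + 1) % period == 0 else 1.0  # placeholder schedule
            x = x - (h / L) * grad(x)
        return x

    # Example on a quadratic f(x) = 0.5 * x^T A x, whose smoothness
    # constant L is the largest eigenvalue of A.
    A = np.diag([1.0, 10.0])
    x_T = gradient_descent_long_steps(lambda x: A @ x, np.array([1.0, 1.0]),
                                      L=10.0, T=200)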

On the Convergence of Asynchronous Parallel Iteration with Arbitrary Delays

Recent years have witnessed a surge of asynchronous parallel (async-parallel) iterative algorithms, driven by problems involving very large-scale data and large numbers of decision variables. Because of asynchrony, the iterates are computed with outdated information, and the age of the outdated information, which we call the delay, is the number of times it has been …
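
One common way to formalize such an update (written here for a generic fixed-point operator T; the notation is an assumption, as the excerpt is truncated before the paper's own setup): at step k an agent overwrites a single coordinate i_k using a possibly stale iterate,

    x^{k+1}_{i_k} = x^{k}_{i_k} - \eta_k \, \big( \hat{x}^k - T(\hat{x}^k) \big)_{i_k}, \qquad x^{k+1}_j = x^k_j \ \text{for } j \ne i_k,

where \hat{x}^k is assembled from components of past iterates; the delay of \hat{x}^k is how many updates behind x^k its components lag.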

Reliable solution of convex quadratic programs with parametric active set methods

Parametric Active Set Methods (PASM) are a relatively new class of methods for solving convex Quadratic Programming (QP) problems. They are based on tracing the solution along a linear homotopy between a QP with known solution and the QP to be solved. We explicitly identify numerical challenges in PASM and develop strategies to meet these …
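
The homotopy can be written out explicitly. Given solved data (g_0, b_0) and target data (g_1, b_1) sharing the Hessian H and constraint matrix A (a standard setup, assumed here), one traces the solution of

    QP(\tau): \ \min_x \ \tfrac{1}{2} x^\top H x + g(\tau)^\top x \quad \text{s.t.} \quad A x \ge b(\tau),
    g(\tau) = (1-\tau) \, g_0 + \tau \, g_1, \qquad b(\tau) = (1-\tau) \, b_0 + \tau \, b_1,

as \tau moves from 0 to 1; along such a linear homotopy the optimal active set changes only at finitely many breakpoints (in the well-behaved case), which the method detects and processes one at a time.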

Generic nondegeneracy in convex optimization

We show that minimizers of convex functions subject to almost all linear perturbations are nondegenerate. An analogous result holds more generally for lower-C^2 functions. Citation: Cornell University, School of Operations Research and Information Engineering, 206 Rhodes Hall, Ithaca, NY 14853, May 2010.
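
In standard convex-analysis terms, the statement can be read as follows (one conventional formulation, assumed here): perturbing f by a linear term gives

    \bar{x} \ \text{minimizes} \ f - \langle v, \cdot \rangle \iff v \in \partial f(\bar{x}),

and the minimizer is nondegenerate when v lies in the relative interior \operatorname{ri} \partial f(\bar{x}); genericity means this holds for almost every v.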

Convex optimization problems involving finite autocorrelation sequences

We discuss convex optimization problems where some of the variables are constrained to be finite autocorrelation sequences. Problems of this form arise in signal processing and communications, and we describe applications in filter design and system identification. Autocorrelation constraints in optimization problems are often approximated by sampling the corresponding power spectral density, which results in …
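
Recall the constraint in question: x = (x_0, ..., x_n) is a finite autocorrelation sequence when it is the autocorrelation of some finite sequence y, which by spectral factorization is equivalent to nonnegativity of the power spectral density:

    x_k = \sum_{t} y_t \, y_{t+k} \ \text{for some } y \quad \Longleftrightarrow \quad X(\omega) = x_0 + 2 \sum_{k=1}^{n} x_k \cos(k\omega) \ \ge \ 0 \ \ \text{for all } \omega.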

Handling Nonnegative Constraints in Spectral Estimation

We consider convex optimization problems with the constraint that the variables form a finite autocorrelation sequence, or equivalently, that the corresponding power spectral density is nonnegative. This constraint is often approximated by sampling the power spectral density, which results in a set of linear inequalities. It can also be cast as a linear matrix inequality …
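
Both reformulations mentioned above can be written down directly (standard forms; the sample points \omega_j are a design choice):

    X(\omega_j) = x_0 + 2 \sum_{k=1}^{n} x_k \cos(k\omega_j) \ \ge \ 0, \quad j = 1, \dots, m      (sampled linear inequalities)

    X(\omega) \ge 0 \ \text{for all } \omega \iff \exists \, P \succeq 0 \ \text{with} \ x_k = \sum_{i} P_{i,\, i+k}, \ k = 0, \dots, n      (linear matrix inequality)

Note that sampling only enforces nonnegativity at finitely many frequencies, so it relaxes the true constraint, whereas the LMI characterization is exact.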