Bound Propagation for Linear Inequalities Revisited

In 2011, Korovin and Voronkov (Proceedings of the 23rd International Conference on Automated Deduction, vol. 6803 of Lecture Notes in Computer Science, pp. 369-383) proposed a method based on bound propagation for solving systems of linear inequalities. In this paper, an alternate description of their algorithm which also incorporates an addition that returns a certificate …
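
A minimal sketch of the bound-tightening idea behind bound propagation, shown on a toy system of inequalities; the data structures, variable names, and the two-constraint example below are illustrative and not taken from the paper.

    # Toy bound propagation for linear inequalities sum_j a_j*x_j <= b.
    # Repeatedly tightens per-variable lower/upper bounds until a fixed point.
    def propagate(constraints, lower, upper, max_rounds=100):
        """constraints: list of ({var: coeff}, rhs) meaning sum coeff*var <= rhs."""
        for _ in range(max_rounds):
            changed = False
            for coeffs, b in constraints:
                for var, a in coeffs.items():
                    if a == 0:
                        continue
                    # Smallest possible value of the remaining terms under current bounds.
                    rest = sum(c * lower[v] if c > 0 else c * upper[v]
                               for v, c in coeffs.items() if v != var)
                    if a > 0 and (b - rest) / a < upper[var]:
                        upper[var], changed = (b - rest) / a, True
                    elif a < 0 and (b - rest) / a > lower[var]:
                        lower[var], changed = (b - rest) / a, True
            if not changed:
                break
        return lower, upper

    # Example: x + y <= 4 and -x <= -1 (i.e. x >= 1), with x in [-10, 10], y in [0, 10].
    cons = [({"x": 1.0, "y": 1.0}, 4.0), ({"x": -1.0}, -1.0)]
    print(propagate(cons, {"x": -10.0, "y": 0.0}, {"x": 10.0, "y": 10.0}))
    # -> ({'x': 1.0, 'y': 0.0}, {'x': 4.0, 'y': 3.0})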

Column-Randomized Linear Programs: Performance Guarantees and Applications

We propose a randomized method for solving linear programs with a large number of columns but a relatively small number of constraints. Since enumerating all the columns is usually unrealistic, such linear programs are commonly solved by column generation, which is often still computationally challenging due to the intractability of the subproblem in many applications. …
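
A hedged sketch of the basic idea, namely sampling a subset of columns and solving the restricted linear program, using randomly generated data; the sampling scheme, problem sizes, and use of scipy below are illustrative stand-ins and do not reproduce the paper's performance guarantees.

    # Column-sampled LP: min c'x  s.t.  Ax = b, x >= 0, with many more columns than rows.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    m, n, k = 20, 5000, 200                      # rows, columns, sampled columns
    A = rng.standard_normal((m, n))
    b = A @ rng.random(n)                        # rhs chosen so the full LP is feasible
    c = rng.random(n)

    cols = rng.choice(n, size=k, replace=False)  # uniform sample of k column indices
    res = linprog(c[cols], A_eq=A[:, cols], b_eq=b, bounds=(0, None), method="highs")

    if res.success:
        x = np.zeros(n)
        x[cols] = res.x                          # lift the restricted solution back
        print("restricted-LP objective:", res.fun)
    else:
        print("restricted LP infeasible: resample or enlarge the column set")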

Necessary and sufficient conditions for rank-one generated cones

A closed convex conic subset $\mathcal{S}$ of the positive semidefinite (PSD) cone is rank-one generated (ROG) if all of its extreme rays are generated by rank-one matrices. The ROG property of $\mathcal{S}$ is closely related to the exactness of SDP relaxations of nonconvex quadratically constrained quadratic programs (QCQPs) related to $\mathcal{S}$. We consider the case …
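
For reference, one standard way to state the ROG property (textbook-style, with no claim about the paper's exact notation): a closed convex cone $\mathcal{S} \subseteq \mathbb{S}^n_+$ is ROG if and only if
\[
\mathcal{S} \;=\; \operatorname{cone}\{\, xx^\top \;:\; xx^\top \in \mathcal{S} \,\},
\]
i.e., $\mathcal{S}$ is generated by the rank-one matrices it contains. Equivalently, every extreme ray of $\mathcal{S}$ is spanned by some rank-one matrix; ruling out higher-rank extreme rays is what ties the property to exactness of the associated SDP relaxations.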

Ideal formulations for constrained convex optimization problems with indicator variables

Motivated by modern regression applications, in this paper, we study the convexification of a class of convex optimization problems with indicator variables and combinatorial constraints on the indicators. Unlike most of the previous work on convexification of sparse regression problems, we simultaneously consider the nonlinear non-separable objective, indicator variables, and combinatorial constraints. Specifically, we give …
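
As background on what convexification with indicator variables typically involves, here is the classical perspective reformulation for a single separable quadratic term (standard material; the paper's non-separable results are not restated here). For an indicator $z\in\{0,1\}$ with the complementarity constraint $x(1-z)=0$,
\[
\overline{\operatorname{conv}}\,\{(x,z,t) : t \ge x^2,\ x(1-z)=0,\ z\in\{0,1\}\}
\;=\; \{(x,z,t) : x^2 \le t z,\ 0 \le z \le 1,\ t \ge 0\},
\]
so replacing $x^2$ by its perspective $x^2/z$ (in rotated second-order cone form $x^2 \le t z$) gives the tightest convex relaxation of each such separable term.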

Memory-efficient structured convex optimization via extreme point sampling

Memory is a key computational bottleneck when solving large-scale convex optimization problems such as semidefinite programs (SDPs). In this paper, we focus on the regime in which storing an n × n matrix decision variable is prohibitive. To solve SDPs in this regime, we develop a randomized algorithm that returns a random vector whose covariance …
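
A toy illustration of the storage pattern: when the matrix variable is a convex combination of rank-one extreme points, one can return a random vector whose covariance equals that matrix without ever forming the n × n matrix. The weights and vectors below are arbitrary stand-ins, not the output of the paper's algorithm.

    # Sample a vector v with E[v v^T] = X = sum_i w_i v_i v_i^T, storing only the v_i.
    import numpy as np

    rng = np.random.default_rng(1)
    n, r = 1000, 5
    V = rng.standard_normal((r, n))   # rows are the rank-one factors v_i
    w = rng.random(r)
    w /= w.sum()                      # convex-combination weights

    def sample_vector():
        i = rng.choice(r, p=w)        # pick an extreme point with probability w_i
        return V[i]                   # then E[v v^T] = sum_i w_i v_i v_i^T

    # Empirical check on the leading 3x3 block, without materializing the full X.
    S = np.array([sample_vector()[:3] for _ in range(20000)])
    X_block = sum(w[i] * np.outer(V[i, :3], V[i, :3]) for i in range(r))
    print("max deviation:", np.abs(S.T @ S / len(S) - X_block).max())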

A simplified treatment of Ramana’s exact dual for semidefinite programming

In semidefinite programming the dual may fail to attain its optimal value and there could be a duality gap, i.e., the primal and dual optimal values may differ. In a striking paper, Ramana proposed a polynomial size extended dual that does not have these deficiencies and yields a number of fundamental results in complexity theory. …
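
For context, the generic primal–dual pair to which these attainment and duality-gap caveats refer (textbook form; no claim is made here about Ramana's specific construction):
\[
p^* = \inf_{x\in\mathbb{R}^m}\Big\{ c^\top x \;:\; \textstyle\sum_{i=1}^m x_i A_i \succeq B \Big\},
\qquad
d^* = \sup_{Y\succeq 0}\Big\{ \langle B, Y\rangle \;:\; \langle A_i, Y\rangle = c_i,\ i=1,\dots,m \Big\}.
\]
Weak duality $d^* \le p^*$ always holds, but without a constraint qualification such as strict feasibility the dual value may not be attained and $d^* < p^*$ is possible; Ramana's extended dual removes both deficiencies without any such assumption.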

A Restricted Dual Peaceman-Rachford Splitting Method for QAP

We revisit and strengthen splitting methods for solving doubly nonnegative (DNN) relaxations of the quadratic assignment problem (QAP). We use a modified restricted contractive splitting method (rPRSM) approach. Our strengthened bounds and new dual multiplier estimates improve on the bounds and convergence results in the literature.

Citation: Department of Combinatorics & Optimization, University of Waterloo, Canada, 06/2019
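
As a generic reference point, the strictly contractive Peaceman–Rachford splitting template for $\min_{y,z}\, f(y)+g(z)$ subject to $Ay+Bz=c$ (the rPRSM used in the paper restricts and modifies this scheme; only the textbook iteration is shown here). With augmented Lagrangian $L_\beta(y,z,\lambda)=f(y)+g(z)+\langle\lambda,\,Ay+Bz-c\rangle+\tfrac{\beta}{2}\|Ay+Bz-c\|^2$ and $\gamma\in(0,1)$:
\begin{align*}
y^{k+1} &\in \arg\min_y\, L_\beta(y, z^k, \lambda^k), \\
\lambda^{k+1/2} &= \lambda^k + \gamma\beta\,(Ay^{k+1}+Bz^k-c), \\
z^{k+1} &\in \arg\min_z\, L_\beta(y^{k+1}, z, \lambda^{k+1/2}), \\
\lambda^{k+1} &= \lambda^{k+1/2} + \gamma\beta\,(Ay^{k+1}+Bz^{k+1}-c).
\end{align*}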

The Equivalence of Fourier-based and Wasserstein Metrics on Imaging Problems

We investigate properties of some extensions of a class of Fourier-based probability metrics, originally introduced to study convergence to equilibrium for the solution to the spatially homogeneous Boltzmann equation. Unlike the original metrics, the new Fourier-based metrics are also well-defined for probability distributions with different centers of mass, and for discrete probability measures …
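
For reference, the classical Fourier-based metric of this family, as used in the kinetic-theory literature (the paper studies extensions of this baseline): for probability measures $\mu,\nu$ on $\mathbb{R}^d$ with characteristic functions $\widehat{\mu},\widehat{\nu}$,
\[
d_s(\mu,\nu) \;=\; \sup_{\xi\neq 0}\ \frac{|\widehat{\mu}(\xi)-\widehat{\nu}(\xi)|}{|\xi|^{s}}, \qquad s>0.
\]
For $s>1$, finiteness of $d_s$ typically requires some moments of $\mu$ and $\nu$ to coincide (equal means for $s\in(1,2]$), which is why the original metrics need equal centers of mass and why the extensions studied here are of interest.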

Inexact and Stochastic Generalized Conditional Gradient with Augmented Lagrangian and Proximal Step

In this paper we propose and analyze inexact and stochastic versions of the CGALP algorithm developed in the authors’ previous paper, denoted ICGALP, which allow for errors in the computation of several important quantities. In particular, this allows one to compute some gradients, proximal terms, and/or linear minimization oracles in an inexact fashion …
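
For orientation, the classical exact conditional gradient step that the inexact analysis relaxes, stated for $\min_{x\in C} f(x)$ over a compact convex set $C$ (the augmented Lagrangian and proximal ingredients of CGALP/ICGALP are omitted here):
\begin{align*}
s^k &\in \arg\min_{s\in C}\ \langle \nabla f(x^k),\, s\rangle && \text{(linear minimization oracle)}, \\
x^{k+1} &= x^k + \gamma_k\,(s^k - x^k), && \gamma_k\in(0,1],
\end{align*}
and the inexact/stochastic variants allow the gradient and the oracle output to be computed with controlled errors.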

Solving Large-Scale Sparse PCA to Certifiable (Near) Optimality

Sparse principal component analysis (PCA) is a popular dimensionality reduction technique for obtaining principal components which are linear combinations of a small subset of the original features. Existing approaches cannot supply certifiably optimal principal components with more than $p=100s$ of variables. By reformulating sparse PCA as a convex mixed-integer semidefinite optimization problem, we design a …
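
For reference, the underlying $\ell_0$-constrained problem (standard formulation; the paper's convex mixed-integer semidefinite reformulation is not restated here):
\[
\max_{x\in\mathbb{R}^p}\ x^\top \Sigma\, x \quad \text{s.t.}\quad \|x\|_2 = 1,\ \ \|x\|_0 \le k,
\]
where $\Sigma$ is the sample covariance matrix and $k$ bounds the number of nonzero loadings of the principal component.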