Semidefinite programming by Projective Cutting-Planes

Seeking tighter relaxations of combinatorial optimization problems, one can turn to semidefinite programming, a generalization of linear programming that offers better bounds and is still polynomially solvable. Yet, in practice, a semidefinite program is still significantly harder to solve than a similar-size Linear Program (LP). It is well-known that a semidefinite program can be written as an … Read more
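For reference, a standard-form semidefinite program, of the kind the truncated sentence alludes to, can be written as (this generic formulation is standard and is not specific to the paper):

\[
\min_{X \in \mathbb{S}^n} \ \langle C, X\rangle
\quad \text{s.t.} \quad \langle A_i, X\rangle = b_i,\ \ i = 1,\dots,m, \qquad X \succeq 0,
\]

where \(\mathbb{S}^n\) is the set of symmetric \(n \times n\) matrices and \(X \succeq 0\) means \(X\) is positive semidefinite; restricting \(X\) to be diagonal recovers a linear program.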

Combining Precision Boosting with LP Iterative Refinement for Exact Linear Optimization

This article studies a combination of the two state-of-the-art algorithms for the exact solution of linear programs (LPs) over the rational numbers, i.e., without any roundoff errors or numerical tolerances. By integrating the method of precision boosting inside an LP iterative refinement loop, the combined algorithm is able to leverage the strengths of both methods: … Read more
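Schematically, and with illustrative notation that is not taken from the paper (a simplified single-scale version; the actual algorithm also manages working precision and separate primal/dual scalings), one round of LP iterative refinement for \(\min\{c^{\top}x : Ax = b,\ x \ge 0\}\) takes an approximate primal–dual pair \((x_k, y_k)\), forms exact residuals, solves a scaled corrector LP in floating point, and adds back the descaled correction:

\[
\hat{b} = \Delta_k\,(b - A x_k), \qquad
\hat{c} = \Delta_k\,(c - A^{\top} y_k), \qquad
(x_{k+1}, y_{k+1}) = (x_k, y_k) + \tfrac{1}{\Delta_k}\,(\hat{x}, \hat{y}),
\]

where \((\hat{x}, \hat{y})\) approximately solves the corrector LP with data \((A, \hat{b}, \hat{c})\) and correspondingly shifted bounds, and \(\Delta_k \gg 1\) is a scaling factor that zooms in on the remaining error.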

Higher-Order Newton Methods with Polynomial Work per Iteration

We present generalizations of Newton’s method that incorporate derivatives of an arbitrary order \(d\) but maintain a polynomial dependence on the dimension in their cost per iteration. At each step, our \(d^{\text{th}}\)-order method uses semidefinite programming to construct and minimize a sum of squares-convex (sos-convex) approximation to the \(d^{\text{th}}\)-order Taylor expansion of the function we wish … Read more
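As a sketch of the quantities involved (the precise construction of the sos-convex model is given in the paper), the \(d^{\text{th}}\)-order Taylor expansion of \(f\) around the current iterate \(x_k\) and the resulting step can be written as

\[
T_d(x; x_k) \;=\; \sum_{j=0}^{d} \frac{1}{j!}\,\nabla^{j} f(x_k)\,[x - x_k]^{j},
\qquad
x_{k+1} \in \operatorname*{arg\,min}_{x}\ \widetilde{T}_d(x; x_k),
\]

where \(\widetilde{T}_d\) denotes a sum of squares-convex polynomial approximating \(T_d\), whose minimization reduces to a semidefinite program.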

Exact Solutions for the NP-hard Wasserstein Barycenter Problem using a Doubly Nonnegative Relaxation and a Splitting Method

The simplified Wasserstein barycenter problem, also known as the cheapest hub problem, consists in selecting one point from each of \(k\) given sets, each containing \(n\) points, with the aim of minimizing the sum of distances to the barycenter of the \(k\) chosen points. This problem is also known as the cheapest … Read more
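In symbols (writing the distance as squared Euclidean, which is an assumption here; the paper's exact metric may differ), the problem described above reads

\[
\min_{x_1 \in S_1,\,\dots,\,x_k \in S_k}\ \sum_{i=1}^{k} \bigl\| x_i - \bar{x} \bigr\|^{2},
\qquad
\bar{x} = \frac{1}{k}\sum_{j=1}^{k} x_j,
\]

where \(S_1,\dots,S_k\) are the given sets of \(n\) points each.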

Geometry of exactness of moment-SOS relaxations for polynomial optimization

The moment-SOS (sum of squares) hierarchy is a powerful approach for globally solving non-convex polynomial optimization problems (POPs), at the price of solving a family of convex semidefinite optimization problems (called moment-SOS relaxations) of increasing size, controlled by an integer called the relaxation order. We say that a relaxation of a given order is exact if … Read more
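For concreteness, a standard way to write the POP and its order-\(r\) SOS relaxation (the generic Putinar-type form, not notation specific to the paper) is

\[
f^{*} = \min_{x \in K} f(x), \qquad K = \{x \in \mathbb{R}^n : g_1(x) \ge 0,\dots,g_m(x) \ge 0\},
\]
\[
\mathrm{sos}_r = \sup_{\lambda,\,\sigma_i} \Bigl\{ \lambda \;:\; f - \lambda = \sigma_0 + \sum_{i=1}^{m} \sigma_i\, g_i,\ \ \sigma_i \text{ SOS},\ \deg(\sigma_0) \le 2r,\ \deg(\sigma_i g_i) \le 2r \Bigr\},
\]

and the order-\(r\) relaxation is exact when \(\mathrm{sos}_r = f^{*}\).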

On Tractable Convex Relaxations of Standard Quadratic Optimization Problems under Sparsity Constraints

Standard quadratic optimization problems (StQPs) provide a versatile modelling tool in various applications. In this paper, we consider StQPs with a hard sparsity constraint, referred to as sparse StQPs. We focus on various tractable convex relaxations of sparse StQPs arising from a mixed-binary quadratic formulation, namely, the linear optimization relaxation given by the reformulation-linearization technique, … Read more
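In standard notation (a generic formulation, assumed here rather than taken from the paper), a sparse StQP reads

\[
\min \ x^{\top} Q x \quad \text{s.t.} \quad e^{\top} x = 1,\ \ x \ge 0,\ \ \|x\|_0 \le \rho,
\]

where \(e\) is the all-ones vector, \(\|x\|_0\) counts the nonzero entries of \(x\), and \(\rho\) is the sparsity level; dropping the cardinality constraint recovers the standard StQP over the simplex.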

Functions associated with the nonconvex second-order cone

The nonconvex second-order cone (nonconvex SOC for short) is a nonconvex extension of the convex second-order cone: it consists of all vectors that can be split into two sub-vectors such that the Euclidean norm of the first sub-vector is at least as large as the Euclidean norm of the second sub-vector. This cone can … Read more
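Written out, the description above corresponds to the set

\[
\mathcal{K} = \bigl\{\, x = (x_1, x_2) \in \mathbb{R}^{n_1} \times \mathbb{R}^{n_2} \;:\; \|x_1\| \ge \|x_2\| \,\bigr\},
\]

where the split into sub-vectors is left generic here; by contrast, the convex second-order cone requires the first block to be a scalar \(t\) with \(t \ge \|x_2\|\).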

Cone product reformulation for global optimization

In this paper, we study nonconvex optimization problems involving sums of linear times convex (SLC) functions as well as conic constraints belonging to one of the five basic cones, that is, the linear cone, second-order cone, power cone, exponential cone, or semidefinite cone. By using the Reformulation-Perspectification Technique, we can obtain a convex relaxation … Read more
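As the name suggests (a generic instance for illustration, not the paper's exact problem class), an SLC objective with conic constraints can be written as

\[
\min_{x}\ \sum_{i=1}^{p} \bigl(a_i^{\top} x + b_i\bigr)\, f_i(x)
\quad \text{s.t.} \quad A x - d \in \mathcal{K},
\]

where each \(f_i\) is convex and \(\mathcal{K}\) is a product of the basic cones listed above.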

Range of the displacement operator of PDHG with applications to quadratic and conic programming

Primal-dual hybrid gradient (PDHG) is a first-order method for saddle-point problems and convex programming introduced by Chambolle and Pock. Recently, Applegate et al. analyzed the behavior of PDHG when applied to an infeasible or unbounded instance of linear programming, and in particular, showed that PDHG is able to diagnose these conditions. Their analysis hinges on … Read more
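For reference, for a saddle-point problem \(\min_x \max_y \; \langle Kx, y\rangle + f(x) - g^{*}(y)\), the PDHG (Chambolle–Pock) iterations with step sizes \(\tau, \sigma > 0\) are

\[
x^{k+1} = \operatorname{prox}_{\tau f}\bigl(x^{k} - \tau K^{\top} y^{k}\bigr), \qquad
y^{k+1} = \operatorname{prox}_{\sigma g^{*}}\bigl(y^{k} + \sigma K\,(2x^{k+1} - x^{k})\bigr).
\]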

Error estimate for regularized optimal transport problems via Bregman divergence

Regularization by the Shannon entropy enables us to solve optimal transport problems on a finite set efficiently and approximately. This paper is concerned with optimal transport problems regularized via a Bregman divergence. We introduce the required properties for Bregman divergences, provide a non-asymptotic error estimate for the regularized problem, and show that the error estimate becomes … Read more
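In the Shannon-entropy case that the paragraph starts from (a generic formulation; the paper replaces the entropy term by a general Bregman divergence), the regularized problem on a finite set reads

\[
\min_{P \in U(a,b)}\ \langle C, P\rangle + \varepsilon \sum_{i,j} P_{ij}\bigl(\log P_{ij} - 1\bigr),
\qquad
U(a,b) = \bigl\{P \ge 0 : P\mathbf{1} = a,\ P^{\top}\mathbf{1} = b\bigr\},
\]

where \(C\) is the cost matrix, \(a\) and \(b\) are the marginal distributions, and \(\varepsilon > 0\) is the regularization parameter.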