Sieve-SDP: a simple facial reduction algorithm to preprocess semidefinite programs

We introduce Sieve-SDP, a simple algorithm to preprocess semidefinite programs (SDPs). Sieve-SDP belongs to the class of facial reduction algorithms. It inspects the constraints of the problem, deletes redundant rows and columns, and reduces the size of the variable matrix. It often detects infeasibility. It does not rely on any optimization solver: the only subroutine … Read more
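
To convey the flavor of such a reduction, here is a minimal sketch (not the authors' implementation; the names and tolerance are illustrative) of the kind of rule a facial-reduction preprocessor can apply to constraints $\langle A_i, X\rangle = b_i$, $X \succeq 0$: if $A_i$ is positive semidefinite and $b_i < 0$, the problem is infeasible; if $A_i$ is positive semidefinite and $b_i = 0$, every feasible $X$ must vanish on the support of $A_i$, so those rows and columns can be deleted.

```python
import numpy as np

def sieve_pass(A_list, b, tol=1e-8):
    """One hypothetical facial-reduction pass in the spirit of Sieve-SDP.

    For a constraint <A_i, X> = b_i whose nonzero block D is positive
    definite (so A_i is PSD): b_i < 0 certifies infeasibility, and
    b_i = 0 forces every feasible X >= 0 to vanish on the rows and
    columns of D, which can then be deleted from the problem.
    Returns a boolean mask of rows/columns to keep, or None if
    infeasibility is detected.
    """
    n = A_list[0].shape[0]
    keep = np.ones(n, dtype=bool)
    for A, bi in zip(A_list, b):
        support = np.abs(A).sum(axis=0) > tol   # nonzero rows/columns of A_i
        if not support.any():
            continue
        D = A[np.ix_(support, support)]
        try:                                    # Cholesky succeeds iff D is positive definite
            np.linalg.cholesky(D)
        except np.linalg.LinAlgError:
            continue
        if bi < -tol:
            return None                         # <A_i, X> >= 0 for all X >= 0: infeasible
        if abs(bi) <= tol:
            keep &= ~support                    # delete these rows and columns
    return keep
```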

Exact worst-case convergence rates of the proximal gradient method for composite convex minimization

We study the worst-case convergence rates of the proximal gradient method for minimizing the sum of a smooth strongly convex function and a non-smooth convex function whose proximal operator is available. We establish the exact worst-case convergence rates of the proximal gradient method in this setting for any step size and for different standard performance … Read more
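
For orientation, the method under analysis takes the standard form below; the sketch is generic (the names grad_f, prox_g and the lasso-style example are illustrative, not taken from the paper).

```python
import numpy as np

def proximal_gradient(grad_f, prox_g, x0, step, n_iters=200):
    """Proximal gradient iteration for min_x f(x) + g(x):
    x_{k+1} = prox_{step*g}(x_k - step * grad_f(x_k)),
    where f is smooth (here strongly convex) and g has a cheap prox.
    """
    x = x0
    for _ in range(n_iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Illustrative instance: f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1,
# whose proximal operator is soft-thresholding.
rng = np.random.default_rng(0)
A, b, lam = rng.normal(size=(20, 5)), rng.normal(size=20), 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)
x_star = proximal_gradient(grad_f, prox_g, np.zeros(5),
                           step=1.0 / np.linalg.norm(A, 2) ** 2)
```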

Complete Facial Reduction in One Step for Spectrahedra

A spectrahedron is the feasible set of a semidefinite program (SDP), i.e., the intersection of an affine set with the positive semidefinite cone. While strict feasibility is a generic property for random problems, there are many classes of problems where strict feasibility fails, and then strong duality can fail as well. If the … Read more
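
In standard notation (the display is ours, not quoted from the paper), a spectrahedron is the set

$$\mathcal{S} \;=\; \{\, X \in \mathbb{S}^n \;:\; \langle A_i, X\rangle = b_i,\ i = 1,\dots,m,\ X \succeq 0 \,\},$$

and strict feasibility means that some feasible $X$ is positive definite, $X \succ 0$; facial reduction is the standard remedy when no such $X$ exists.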

Generalized ADMM with Optimal Indefinite Proximal Term for Linearly Constrained Convex Optimization

We consider the generalized alternating direction method of multipliers (ADMM) for linearly constrained convex optimization. Experience with many practical applications has shown that one of the subproblems in the generalized ADMM is often hard to solve, so a special proximal term is added. In the literature, the proximal term can be indefinite, which plays … Read more
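
Schematically (our generic notation, not the paper's), the proximalized subproblem reads

$$x^{k+1} \;=\; \arg\min_{x}\; \mathcal{L}_\beta(x, y^k, \lambda^k) \;+\; \tfrac{1}{2}\,\|x - x^k\|_T^2,$$

where $\mathcal{L}_\beta$ is the augmented Lagrangian, $\|u\|_T^2 = u^\top T u$, and $T$ is the proximal matrix; classical analyses require $T \succeq 0$, whereas here $T$ is allowed to be indefinite.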

“Relative-Continuity” for Non-Lipschitz Non-Smooth Convex Optimization using Stochastic (or Deterministic) Mirror Descent

The usual approach to developing and analyzing first-order methods for non-smooth (stochastic or deterministic) convex optimization assumes that the objective function is uniformly Lipschitz continuous with parameter $M_f$. However, in many settings the non-differentiable convex function $f(\cdot)$ is not uniformly Lipschitz continuous — for example (i) the classical support vector machine (SVM) problem, (ii) the … Read more
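
For orientation (notation ours), a generic mirror descent step with reference function $h$ and step size $t_k$ is

$$x^{k+1} \;=\; \arg\min_{x}\; \langle g^k, x\rangle \;+\; \tfrac{1}{t_k}\, D_h(x, x^k), \qquad D_h(x,y) = h(x) - h(y) - \langle \nabla h(y),\, x - y\rangle,$$

where $g^k$ is a (possibly stochastic) subgradient of $f$ at $x^k$; relative continuity bounds the subgradients relative to the Bregman divergence $D_h$ rather than by a uniform constant $M_f$.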

Response to “Counterexample to global convergence of DSOS and SDSOS hierarchies”

In a recent note [8], the author provides a counterexample to the global convergence of what his work refers to as “the DSOS and SDSOS hierarchies” for polynomial optimization problems (POPs) and purports that this refutes claims in our extended abstract [4] and slides in [3]. The goal of this paper is to clarify that … Read more

On the Construction of Converging Hierarchies for Polynomial Optimization Based on Certificates of Global Positivity

In recent years, techniques based on convex optimization and real algebra that produce converging hierarchies of lower bounds for polynomial minimization problems have gained much popularity. At their heart, these hierarchies rely crucially on Positivstellensätze from the late 20th century (e.g., due to Stengle, Putinar, or Schmüdgen) that certify positivity of a polynomial on an … Read more
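
To make "certificate of positivity" concrete, Putinar's Positivstellensatz (one of the results cited) states: if $p > 0$ on $K = \{x : g_1(x) \ge 0, \dots, g_m(x) \ge 0\}$ and the associated quadratic module is Archimedean, then

$$p \;=\; \sigma_0 \;+\; \sum_{i=1}^{m} \sigma_i\, g_i$$

for some sums of squares $\sigma_0, \dots, \sigma_m$; truncating the degrees of the $\sigma_i$ yields a converging hierarchy of lower bounds.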

Improving Efficiency and Scalability of Sum of Squares Optimization: Recent Advances and Limitations

It is well known that any sum of squares (SOS) program can be cast as a semidefinite program (SDP) of a particular structure and that therein lies the computational bottleneck for SOS programs, as the SDPs generated by this procedure are large and costly to solve when the polynomials involved in the SOS programs have a … Read more
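
The cast in question rests on the classical fact that a polynomial $p$ of degree $2d$ in $n$ variables is SOS if and only if

$$p(x) \;=\; z(x)^\top Q\, z(x) \quad \text{for some } Q \succeq 0,$$

where $z(x)$ is the vector of monomials of degree at most $d$; since $z(x)$ has length $\binom{n+d}{d}$, the resulting SDP grows quickly with $n$ and $d$, which is the bottleneck referred to above.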

Uniqueness and Multiplicity of Market Equilibria on DC Power Flow Networks

We consider uniqueness and multiplicity of market equilibria in a short-run setup where traded quantities of electricity are transported through a capacitated network in which power flows have to satisfy the classical lossless DC approximation. The firms face fluctuating demand and decide on their production, which is constrained by given capacities. Today, uniqueness of such … Read more
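
In its usual form (our notation), the lossless DC approximation models the flow on a line $(i,j)$ as

$$f_{ij} \;=\; B_{ij}\,(\theta_i - \theta_j), \qquad |f_{ij}| \le \bar{f}_{ij},$$

where $\theta_i$ is the voltage angle at node $i$, $B_{ij}$ the line susceptance, and $\bar{f}_{ij}$ the line capacity.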

Balancing Communication and Computation in Distributed Optimization

Methods for distributed optimization have received significant attention in recent years owing to their wide applicability in various domains including machine learning, robotics and sensor networks. A distributed optimization method typically consists of two key components: communication and computation. More specifically, at every iteration (or every several iterations) of a distributed algorithm, each node in … Read more
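
As one classical instance of this split (decentralized gradient descent, chosen purely for illustration and not claimed to be the paper's method), each iteration combines a communication step over the network with a local computation step:

```python
import numpy as np

def decentralized_gd(grads, W, x0, step, n_iters=100):
    """Decentralized gradient descent sketch: each node holds a local
    iterate, mixes it with its neighbors' iterates via the doubly
    stochastic matrix W (communication), and takes a step along its
    local gradient (computation).
    grads: one gradient oracle per node; x0: (n_nodes, dim) array.
    """
    x = x0.copy()
    for _ in range(n_iters):
        g = np.stack([grad(xi) for grad, xi in zip(grads, x)])  # computation: local gradients
        x = W @ x - step * g                                    # communication: mix, then update
    return x
```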