A Survey of Recent Scalability Improvements for Semidefinite Programming with Applications in Machine Learning, Control, and Robotics

Historically, scalability has been a major challenge to the successful application of semidefinite programming in fields such as machine learning, control, and robotics. In this paper, we survey recent approaches for addressing this challenge, including (i) approaches for exploiting structure (e.g., sparsity and symmetry) in a problem, (ii) approaches that produce low-rank approximate solutions to …
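
For reference, a semidefinite program in standard primal form (with generic placeholder data $C, A_1,\dots,A_m \in \mathbb{S}^n$ and $b\in\mathbb{R}^m$, not notation from the survey) reads
\[
\min_{X\in\mathbb{S}^n}\ \langle C, X\rangle \quad \text{s.t.}\quad \langle A_i, X\rangle = b_i,\ i=1,\dots,m, \qquad X\succeq 0,
\]
where $\langle A, B\rangle=\mathrm{trace}(AB)$ and $X\succeq 0$ means that $X$ is positive semidefinite. The structure-exploiting and low-rank approaches surveyed aim to solve problems of this form at scales that are out of reach for general-purpose solvers.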

Time-Varying Semidefinite Programs

We study time-varying semidefinite programs (TV-SDPs), which are semidefinite programs whose data (and solutions) are functions of time. Our focus is on the setting where the data varies polynomially with time. We show that under a strict feasibility assumption, restricting the solutions to also be polynomial functions of time does not change the optimal value …
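
As an illustration (the notation here is ours, not necessarily that of the paper), a TV-SDP over a horizon $[0,1]$ with data varying polynomially in $t$ might take the form
\[
\sup_{X(\cdot)}\ \int_0^1 \langle C(t), X(t)\rangle\, dt \quad \text{s.t.}\quad \langle A_i(t), X(t)\rangle \le b_i(t)\ \ \forall t\in[0,1],\ i=1,\dots,m, \qquad X(t)\succeq 0\ \ \forall t\in[0,1],
\]
where the entries of $C(t)$, $A_i(t)$, and $b_i(t)$ are polynomials in $t$. The result described above then says that, under strict feasibility, requiring $X(t)$ to be a polynomial matrix function of $t$ does not change the optimal value.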

On the Complexity of Detecting Convexity over a Box

It has recently been shown that the problem of testing global convexity of polynomials of degree four is strongly NP-hard, answering an open question of N.Z. Shor. This result is minimal in the degree of the polynomial when global convexity is of concern. In a number of applications, however, one is interested in testing convexity …
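
To make the decision problem concrete: a polynomial $p$ is convex over a full-dimensional box $B=[l_1,u_1]\times\cdots\times[l_n,u_n]$ (our notation) if and only if its Hessian is positive semidefinite there,
\[
\nabla^2 p(x) \succeq 0 \quad \text{for all } x\in B,
\]
so testing convexity over a box amounts to certifying positive semidefiniteness of a polynomial matrix on a compact set.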

Robust-to-Dynamics Optimization

A robust-to-dynamics optimization (RDO) problem is an optimization problem specified by two pieces of input: (i) a mathematical program (an objective function $f:\mathbb{R}^n\rightarrow\mathbb{R}$ and a feasible set $\Omega\subseteq\mathbb{R}^n$), and (ii) a dynamical system (a map $g:\mathbb{R}^n\rightarrow\mathbb{R}^n$). Its goal is to minimize $f$ over the set $\mathcal{S}\subseteq\Omega$ of initial conditions that forever remain in $\Omega$ under …
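
In the notation of the abstract, and reading the dynamics as the discrete-time iteration $x_{k+1}=g(x_k)$, RDO can be written compactly as
\[
\min_{x\in\mathcal{S}} f(x), \qquad \mathcal{S}=\{\, x\in\Omega \ :\ g^{(k)}(x)\in\Omega \ \text{for all integers } k\ge 0 \,\},
\]
where $g^{(k)}$ denotes the $k$-fold composition of $g$ with itself (and $g^{(0)}$ is the identity map).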

On the Complexity of Testing Attainment of the Optimal Value in Nonlinear Optimization

We prove that unless P=NP, there exists no polynomial-time (or even pseudo-polynomial-time) algorithm that can test whether the optimal value of a nonlinear optimization problem, where the objective and constraints are given by low-degree polynomials, is attained. If the degrees of these polynomials are fixed, our results along with previously known “Frank-Wolfe type” theorems …
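
A classical example of non-attainment (standard in the literature, not taken from the paper) is
\[
\inf_{(x,y)\in\mathbb{R}^2}\ x^2+(xy-1)^2 = 0,
\]
where the objective is positive everywhere, yet can be made arbitrarily small by setting $x=1/y$ and letting $y\to\infty$; hence the optimal value is not attained.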

A Globally Asymptotically Stable Polynomial Vector Field with Rational Coefficients and no Local Polynomial Lyapunov Function

We give an explicit example of a two-dimensional polynomial vector field of degree seven that has rational coefficients, is globally asymptotically stable, but does not admit an analytic Lyapunov function even locally.
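
For context (these are the standard local Lyapunov conditions, not notation from the paper): a continuously differentiable function $V$ certifies local asymptotic stability of the origin for $\dot{x}=f(x)$ if
\[
V(0)=0, \qquad V(x)>0 \ \text{ and } \ \langle \nabla V(x), f(x)\rangle < 0 \quad \text{for all } x\neq 0 \text{ in a neighborhood of the origin}.
\]
The example above shows that, even for a globally asymptotically stable polynomial vector field, no analytic (in particular, no polynomial) $V$ of this kind needs to exist.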

SOS-Convex Lyapunov Functions and Stability of Difference Inclusions

We introduce the concept of sos-convex Lyapunov functions for stability analysis of both linear and nonlinear difference inclusions (also known as discrete-time switched systems). These are polynomial Lyapunov functions that have an algebraic certificate of convexity and that can be efficiently found via semidefinite programming. We prove that sos-convex Lyapunov functions are universal (i.e., necessary …
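
Concretely, a polynomial $p$ is sos-convex if
\[
(x,y) \mapsto y^{\top}\nabla^2 p(x)\, y \ \text{ is a sum of squares as a polynomial in } (x,y);
\]
this is the algebraic certificate of convexity referred to above, and its existence can be checked by semidefinite programming.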

On Algebraic Proofs of Stability for Homogeneous Vector Fields

We prove that if a homogeneous, continuously differentiable vector field is asymptotically stable, then it admits a Lyapunov function which is the ratio of two polynomials (i.e., a rational function). We further show that when the vector field is polynomial, the Lyapunov inequalities on both the rational function and its derivative have sum of squares …

Sum of squares certificates for stability of planar, homogeneous, and switched systems

We show that the existence of a global polynomial Lyapunov function for a homogeneous polynomial vector field or a planar polynomial vector field (under a mild condition) implies the existence of a polynomial Lyapunov function that is a sum of squares (sos) and the negative of whose derivative is also a sum of squares. This result …
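
Writing the vector field as $\dot{x}=f(x)$ and the Lyapunov function as $V$ (our notation), the strengthened certificate guaranteed by this result is
\[
V \ \text{is sos} \qquad \text{and} \qquad -\dot{V} = -\langle \nabla V, f\rangle \ \text{is sos},
\]
both of which can be checked by semidefinite programming once the degree of $V$ is fixed.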

Response to “Counterexample to global convergence of DSOS and SDSOS hierarchies”

In a recent note [8], the author provides a counterexample to the global convergence of what his work refers to as “the DSOS and SDSOS hierarchies” for polynomial optimization problems (POPs) and purports that this refutes claims in our extended abstract [4] and slides in [3]. The goal of this paper is to clarify that …