Separating Doubly Nonnegative and Completely Positive Matrices

The cone of Completely Positive (CP) matrices can be used to exactly formulate a variety of NP-hard optimization problems. A tractable relaxation for CP matrices is provided by the cone of Doubly Nonnegative (DNN) matrices; that is, matrices that are both positive semidefinite and componentwise nonnegative. A natural problem in the optimization setting is then … Read more
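
For reference, the two cones can be written as follows (standard definitions, not quoted from the paper): a matrix $A$ is completely positive if $A = BB^T$ for some entrywise nonnegative $B$, while $A$ is doubly nonnegative if it is positive semidefinite and entrywise nonnegative,
\[
\mathcal{CP}_n = \{\, BB^T : B \ge 0 \,\}, \qquad \mathcal{DNN}_n = \{\, A : A \succeq 0,\ A \ge 0 \,\},
\]
so $\mathcal{CP}_n \subseteq \mathcal{DNN}_n$, and the inclusion is known to be strict for $n \ge 5$.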

The matricial relaxation of a linear matrix inequality

Given linear matrix inequalities (LMIs) L_1 and L_2, it is natural to ask: (Q1) when does one dominate the other, that is, when does $L_1(X) \succeq 0$ imply $L_2(X) \succeq 0$? (Q2) when do they have the same solution set? Such questions can be NP-hard. This paper describes a natural relaxation of an LMI, based on substituting matrices … Read more
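
As a brief sketch of the notation (assuming the usual linear-pencil setup rather than quoting the paper), an LMI is given by a pencil $L(x) = A_0 + \sum_{j=1}^{g} A_j x_j \succeq 0$ with symmetric matrices $A_j$, and the matricial relaxation replaces the scalar variables $x_j$ by symmetric matrices $X_j$ of arbitrary common size:
\[
L(X) \;=\; A_0 \otimes I \;+\; \sum_{j=1}^{g} A_j \otimes X_j \;\succeq\; 0 .
\]
Domination of $L_2$ by $L_1$ then asks whether every such matrix tuple $X$ with $L_1(X) \succeq 0$ also satisfies $L_2(X) \succeq 0$.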

An inexact interior point method for L1-regularized sparse covariance selection

Sparse covariance selection problems can be formulated as log-determinant (log-det) semidefinite programming (SDP) problems with large numbers of linear constraints. Standard primal-dual interior-point methods that are based on solving the Schur complement equation would encounter severe computational bottlenecks if they are applied to solve these SDPs. In this paper, we consider a customized inexact primal-dual … Read more
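
For orientation, a common form of the underlying problem (the exact constraints treated in the paper may differ) is the $\ell_1$-regularized maximum-likelihood estimate of a sparse inverse covariance matrix,
\[
\min_{X \succ 0} \; \langle S, X \rangle \;-\; \log\det X \;+\; \rho \sum_{i,j} |X_{ij}|,
\]
where $S$ is the sample covariance matrix and $\rho > 0$ controls sparsity; rewriting the $\ell_1$ term with auxiliary variables and inequalities is what produces a log-det SDP with a large number of linear constraints.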

On Computation of Performance Bounds of Optimal Index Assignment

Channel-optimized index assignment of source codewords is arguably the simplest way of improving transmission error resilience, while keeping the source and/or channel codes intact. But the optimal design of an index assignment is an instance of the quadratic assignment problem (QAP), one of the hardest optimization problems in the NP-complete class. In this paper we make a … Read more
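
In Koopmans–Beckmann form (given here only as a generic illustration, with matrices $a$ and $b$ standing in for the source-distortion and channel-error data), a QAP reads
\[
\min_{\pi \in S_n} \; \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij}\, b_{\pi(i)\pi(j)},
\]
where the minimum is taken over all permutations $\pi$ of the $n$ codeword indices.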

On the complexity of computing the handicap of a sufficient matrix

The class of sufficient matrices is important in the study of the linear complementarity problem (LCP) – some interior point methods (IPMs) for LCPs with sufficient data matrices have complexity polynomial in the bit size of the matrix and its handicap. In this paper we show that the handicap of a sufficient matrix may be exponential … Read more
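
For context, the standard definitions read roughly as follows (conventions vary slightly across the literature): $M$ is a $P_*(\kappa)$-matrix if
\[
(1+4\kappa) \sum_{i \in I_+(x)} x_i (Mx)_i \;+\; \sum_{i \in I_-(x)} x_i (Mx)_i \;\ge\; 0 \qquad \text{for all } x \in R^n,
\]
where $I_+(x)$ and $I_-(x)$ collect the indices with $x_i(Mx)_i > 0$ and $x_i(Mx)_i < 0$, respectively; the handicap of $M$ is the smallest such $\kappa$, and $M$ is sufficient exactly when this $\kappa$ is finite.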

Discriminants and Nonnegative Polynomials

For a semialgebraic set K in R^n, let P_d(K) be the cone of polynomials in R^n of degree at most d that are nonnegative on K. This paper studies the geometry of its boundary. When K=R^n and d is even, we show that its boundary lies on the irreducible hypersurface defined by the discriminant of … Read more
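
A one-variable illustration (not taken from the paper): for $n=1$ and $d=2$, the polynomial $f(x) = ax^2 + bx + c$ with $a > 0$ is nonnegative on $R$ exactly when
\[
b^2 - 4ac \;\le\; 0,
\]
and it lies on the boundary of $P_2(R)$ precisely when it has a double real root, i.e. when the discriminant $b^2 - 4ac$ vanishes.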

A high-performance software package for semidefinite programs: SDPA 7

The SDPA (SemiDefinite Programming Algorithm) Project, launched in 1995, has been known to provide high-performance packages for solving large-scale Semidefinite Programs (SDPs). SDPA Ver. 6 solves large-scale dense SDPs efficiently; however, it requires much more computation time than other software packages, especially when the Schur complement matrix is sparse. SDPA Ver. 7 is now completely … Read more

Superlinear Convergence of Infeasible Predictor-Corrector Path-Following Interior Point Algorithm for SDLCP using the HKM Direction

An interior point method (IPM) defines a search direction at each interior point of a region. These search directions form a direction field, which in turn gives rise to a system of ordinary differential equations (ODEs). The solutions of this system of ODEs can be viewed as underlying paths in the interior of the region. In … Read more
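
Schematically (a sketch in generic IPM notation, not specific to the paper's SDLCP setting), if $\Delta z(z)$ denotes the search direction computed at an interior point $z$, the direction field gives rise to the initial value problem
\[
\frac{dz}{dt} \;=\; \Delta z\bigl(z(t)\bigr), \qquad z(0) = z^{0},
\]
whose solution traces a path through the interior starting from the given iterate $z^{0}$.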

Copositivity detection by difference-of-convex decomposition and omega-subdivision

We present three new copositivity tests based upon difference-of-convex (d.c.) decompositions, and combine them into a branch-and-bound algorithm of $\omega$-subdivision type. The tests employ LP or convex QP techniques, but can also be used heuristically with appropriate test points. We also discuss the selection of efficient d.c.~decompositions and propose some preprocessing ideas based on the … Read more
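
As background (standard definitions, not quoted from the paper), a symmetric matrix $Q$ is copositive if $x^\top Q x \ge 0$ for all $x \ge 0$, and a d.c. decomposition splits the quadratic form into a difference of convex pieces,
\[
Q \;=\; Q_1 - Q_2, \qquad Q_1 \succeq 0,\ \ Q_2 \succeq 0,
\]
which always exists (e.g. by separating the positive and negative eigenvalues in a spectral decomposition); one natural way to exploit it, roughly what LP- or QP-based tests build on, is to compare the two convex quadratics over the standard simplex.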

A joint+marginal approach to parametric polynomial optimization

Given a compact parameter set $Y\subset R^p$, we consider polynomial optimization problems $(P_y)$ on $R^n$ whose description depends on the parameter $y\in Y$. We assume that one can compute all moments of some probability measure $\varphi$ on $Y$, absolutely continuous with respect to the Lebesgue measure (e.g. $Y$ is a box or a simplex and … Read more
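
In the notation typically used for this joint+marginal approach (details may differ from the paper), one is interested in the optimal-value function
\[
J(y) \;=\; \min_{x \in R^n} \bigl\{\, f(x,y) \;:\; h_j(x,y) \ge 0,\ j = 1,\dots,m \,\bigr\}, \qquad y \in Y,
\]
and the moments of $\varphi$ are used to approximate $J$ (and associated optimal solutions) on $Y$ via a hierarchy of semidefinite relaxations of the joint problem in $(x,y)$.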