A polynomial matrix inequality is a statement that a symmetric polynomial
matrix is positive semidefinite over a given constraint set. Polynomial matrix
optimization concerns minimizing the smallest eigenvalue of a symmetric
polynomial matrix subject to a tuple of polynomial matrix inequalities.
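In symbols, and with the notation (F, G_j, K) introduced here purely for illustration rather than taken from the paper, the problem reads:

```latex
\lambda^{\star} \;=\; \min_{x \in K} \ \lambda_{\min}\bigl(F(x)\bigr),
\qquad
K = \bigl\{\, x \in \mathbb{R}^n : G_1(x) \succeq 0,\ \dots,\ G_m(x) \succeq 0 \,\bigr\},
```

where F and each G_j are symmetric matrices with polynomial entries and "⪰ 0" denotes positive semidefiniteness.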
This work explores the use of sparsity methods to reduce the complexity of sum-of-squares-based approaches for verifying polynomial matrix inequalities and for solving polynomial matrix optimization problems. In the unconstrained setting, Newton polytopes can be employed to sparsify the monomial basis, resulting in smaller semidefinite programs.
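To illustrate the Newton polytope reduction in the simplest (scalar) case: any sum-of-squares decomposition of a polynomial p only needs monomials x^α with 2α inside the Newton polytope of p, so candidate monomials outside half the polytope can be discarded before the semidefinite program is formed. Below is a minimal Python sketch of this filter, not the paper's implementation; the function names are hypothetical, and hull membership is tested by a straightforward LP feasibility check.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(points, y):
    """Check y in conv(points) via LP feasibility:
    find lambda >= 0 with sum(lambda) = 1 and points.T @ lambda = y."""
    P = np.asarray(points, dtype=float)
    m = P.shape[0]
    A_eq = np.vstack([P.T, np.ones((1, m))])
    b_eq = np.append(np.asarray(y, dtype=float), 1.0)
    res = linprog(np.zeros(m), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * m, method="highs")
    return res.status == 0  # status 0 = solved, i.e. the LP is feasible

def newton_basis(support, n, half_deg):
    """Keep only candidate monomials alpha with |alpha| <= half_deg whose
    doubled exponent 2*alpha lies in the Newton polytope conv(support)."""
    candidates = (a for a in itertools.product(range(half_deg + 1), repeat=n)
                  if sum(a) <= half_deg)
    return [a for a in candidates if in_convex_hull(support, 2 * np.array(a))]

# Motzkin polynomial x^4 y^2 + x^2 y^4 - 3 x^2 y^2 + 1, given by its support:
support = [(4, 2), (2, 4), (2, 2), (0, 0)]
print(newton_basis(support, n=2, half_deg=3))
# -> [(0, 0), (1, 1), (1, 2), (2, 1)]
```

On this example the filter keeps 4 of the 10 monomials of degree at most 3, shrinking the Gram matrix of the semidefinite program accordingly.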
In the general setting, we show how to exploit different types of sparsity (term sparsity, correlative sparsity, and matrix sparsity) encoded in polynomial matrices to derive sparse semidefinite programming relaxations for polynomial matrix optimization.
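For intuition on the correlative-sparsity pipeline, here is a generic sketch of the standard clique-decomposition step (not code from this work; the coupling data is hypothetical): variables appearing together in some monomial or constraint are linked in a graph, the graph is extended to a chordal one, and the maximal cliques of the extension determine the variable groups of the sparse relaxation.

```python
import networkx as nx

# Hypothetical correlative sparsity pattern: pairs of variable indices that
# occur together in some monomial or constraint of the problem.
couplings = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]  # chain of 6 variables

G = nx.Graph(couplings)
H, _ = nx.complete_to_chordal_graph(G)      # chordal extension (adds fill-in)
cliques = list(nx.chordal_graph_cliques(H)) # maximal cliques of the extension
print(sorted(map(sorted, cliques)))
# -> [[0, 1], [1, 2], [2, 3], [3, 4], [4, 5]]
# Each clique indexes one small moment/localizing block of the sparse
# relaxation, replacing a single large block over all six variables.
```

For a chain like this the cliques overlap in a way that satisfies the running intersection property referenced later in this abstract.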
For term sparsity, one intriguing phenomenon is that the associated block structures do not necessarily converge to the blocks determined by sign symmetries, in sharp contrast to the scalar case.
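Sign symmetries, which govern the limiting block structure in the scalar case, are the binary vectors r in {0,1}^n with r·α ≡ 0 (mod 2) for every exponent α in the support. A minimal brute-force sketch (the helper name is mine):

```python
import itertools
import numpy as np

def sign_symmetries(support):
    """All r in {0,1}^n with r . alpha = 0 (mod 2) for every exponent alpha
    in the support (brute force over 2^n vectors; fine for small n)."""
    A = np.array(support) % 2
    n = A.shape[1]
    return [r for r in itertools.product((0, 1), repeat=n)
            if not ((A @ r) % 2).any()]

# Example: support of x1^2 + x1*x2 + x2^2. Its only sign symmetries are the
# identity and the simultaneous flip (x1, x2) -> (-x1, -x2).
print(sign_symmetries([(2, 0), (1, 1), (0, 2)]))
# -> [(0, 0), (1, 1)]
```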
For correlative sparsity, in contrast to the scalar case, we provide a counterexample showing that asymptotic convergence of the sparse relaxations may fail even under the Archimedean condition and the running intersection property.
By employing the theory of matrix-valued measures, we establish several results on detecting global optimality and retrieving optimal solutions under correlative sparsity.
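A standard ingredient for such certificates is a flat-truncation rank test on the moment matrix returned by the semidefinite program. The sketch below shows only the scalar version of this rank test for intuition, with toy data; the paper's conditions for matrix-valued measures are not reproduced here.

```python
import numpy as np

def numerical_rank(M, tol=1e-6):
    """Numerical rank from the singular values of M."""
    return int((np.linalg.svd(M, compute_uv=False) > tol).sum())

def is_flat(M_t, size_lower):
    """Flat truncation test: the moment matrix M_t is 'flat' when it has the
    same rank as its leading principal submatrix (the moment matrix truncated
    one order lower, of side length size_lower)."""
    return numerical_rank(M_t) == numerical_rank(M_t[:size_lower, :size_lower])

# Toy moment matrix of the Dirac measure at x = 2 in one variable, with rows
# and columns indexed by the monomials 1, x, x^2:
v = np.array([1.0, 2.0, 4.0])     # values of 1, x, x^2 at x = 2
M2 = np.outer(v, v)               # order-2 moment matrix, rank one
print(is_flat(M2, size_lower=2))  # True: rank 1 == rank 1, flatness holds
```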
The effectiveness of these sparsity methods in reducing computational complexity is demonstrated on various examples of polynomial matrix optimization.