A joint+marginal approach to parametric polynomial optimization

Given a compact parameter set $Y\subset\mathbb{R}^p$, we consider polynomial optimization problems $(P_y)$ on $\mathbb{R}^n$ whose description depends on the parameter $y\in Y$. We assume that one can compute all moments of some probability measure $\varphi$ on $Y$, absolutely continuous with respect to the Lebesgue measure (e.g., $Y$ is a box or a simplex and $\varphi$ is uniformly distributed). We then provide a hierarchy of semidefinite relaxations whose associated sequence of optimal solutions converges to the moment vector of a probability measure that encodes all information about the global optimal solutions $x^*(y)$ of $(P_y)$, as $y$ ranges over $Y$. In particular, one may approximate as closely as desired any polynomial functional of the optimal solutions, e.g., their $\varphi$-mean. In addition, using this knowledge of moments, the measurable function $y\mapsto x^*_k(y)$ giving the $k$-th coordinate of optimal solutions can be estimated, e.g., by maximum entropy methods. Also, for a Boolean variable $x_k$, one may approximate as closely as desired its persistency $\varphi(\{y : x^*_k(y)=1\})$, i.e., the probability that in an optimal solution $x^*(y)$ the coordinate $x^*_k(y)$ takes the value $1$. Last but not least, from an optimal solution of the dual semidefinite relaxations, we obtain a sequence of polynomial (resp. piecewise polynomial) lower approximations of the optimal value function, with $L_1(\varphi)$ (resp. $\varphi$-almost uniform) convergence.
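In outline (using symbols assumed here for illustration rather than quoted from the paper: $f\in\mathbb{R}[x,y]$ the objective of $(P_y)$, $K\subseteq\mathbb{R}^n\times Y$ the joint feasible set, and $J(y)$ the optimal value of $(P_y)$), the joint+marginal idea is to lift the whole parametric family to a single linear program over measures,

$$\rho \;=\; \min_{\mu\in\mathcal{M}(K)_+}\Big\{\int_K f\,d\mu \;:\; \pi_Y\,\mu=\varphi\Big\},$$

where $\mathcal{M}(K)_+$ denotes the finite Borel measures on $K$ and $\pi_Y\,\mu$ the marginal of $\mu$ on $Y$. Under mild assumptions $\rho=\int_Y J(y)\,d\varphi(y)$, and if $(P_y)$ has a unique global minimizer $x^*(y)$ for $\varphi$-almost every $y$, an optimal $\mu^*$ is the joint distribution of $(x^*(y),y)$ with $y\sim\varphi$, so its moments encode the maps $y\mapsto x^*_k(y)$; the semidefinite hierarchy then optimizes over truncated moment sequences of $\mu$.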
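As a small illustration of the moments-to-function step mentioned above, the following Python sketch recovers a nonnegative function on $Y=[0,1]$ (standing in for $y\mapsto x^*_k(y)$, with $\varphi$ uniform) from a few of its moments via the classical maximum-entropy ansatz $g(y)=\exp(\sum_j\lambda_j y^j)$. The function true_g and the number of moments are illustrative assumptions, not taken from the paper; in practice the moments would come from the semidefinite relaxations.

    # Minimal maximum-entropy sketch: recover g on [0,1] from moments
    # m_j = int_0^1 y^j g(y) dy, using the ansatz g(y) = exp(sum_i lam_i y^i).
    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import least_squares

    def maxent_estimate(moments):
        d = len(moments)
        poly = np.polynomial.polynomial.polyval  # evaluates sum_i lam_i * y**i
        def residuals(lam):
            g = lambda y: np.exp(poly(y, lam))
            return [quad(lambda y: y**j * g(y), 0.0, 1.0)[0] - moments[j]
                    for j in range(d)]
        lam = least_squares(residuals, x0=np.zeros(d)).x
        return lambda y: np.exp(poly(y, lam))

    # Hypothetical stand-in for the k-th coordinate map y -> x_k^*(y).
    true_g = lambda y: y**2 + 0.1
    moments = [quad(lambda y: y**j * true_g(y), 0.0, 1.0)[0] for j in range(4)]
    g_hat = maxent_estimate(moments)
    print([round(float(g_hat(t)), 3) for t in (0.0, 0.5, 1.0)])  # tracks true_g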

Citation

To appear in SIAM J. Optim.
