Exponential Decay of Sensitivity in Graph-Structured Nonlinear Programs

We study solution sensitivity for nonlinear programs (NLPs) whose structure is induced by a graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$. These graph-structured NLPs arise in many applications such as dynamic optimization, stochastic optimization, optimization with partial differential equations, and network optimization. We show that the sensitivity of the primal-dual solution at node $i\in \mathcal{V}$ to a data perturbation at node $j\in \mathcal{V}$ is bounded by $\Upsilon \rho^{d_\mathcal{G}(i,j)}$ for constants $\Upsilon>0$ and $\rho\in(0,1)$, where $d_\mathcal{G}(i,j)$ is the distance between $i$ and $j$ on $\mathcal{G}$. In other words, the sensitivity of the solution decays exponentially with the distance to the perturbation point. This result, which we call exponential decay of sensitivity (EDS), holds under fairly standard assumptions used in classical NLP sensitivity theory: the strong second-order sufficiency condition and the linear independence constraint qualification. We also present conditions under which the constants $(\Upsilon,\rho)$ remain uniformly bounded; this allows us to characterize behavior for NLPs defined over subgraphs of infinite graphs (e.g., those arising in problems with unbounded domains). Our results provide new insights into how perturbations propagate through the NLP graph and how the problem formulation influences such propagation. Specifically, we provide empirical evidence that positive objective curvature and constraint flexibility tend to dampen propagation. The developments are illustrated with numerical examples.
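As a minimal numerical sketch of the EDS phenomenon (not taken from the paper), consider a quadratic program on a path graph: minimize $\sum_i (x_i - d_i)^2 + \gamma \sum_i (x_{i+1}-x_i)^2$. The stationarity condition is $(I+\gamma L)x = d$ with $L$ the path-graph Laplacian, so the sensitivity of $x_i$ to the data $d_j$ is the $(i,j)$ entry of $(I+\gamma L)^{-1}$, which decays exponentially in $|i-j|$. The parameters below ($N$, $\gamma$, the choice of node $j$) are illustrative assumptions.

import numpy as np

N = 50          # number of nodes on the path graph (illustrative)
gamma = 2.0     # coupling strength between neighboring nodes (illustrative)

# Build H = I + gamma * L, the Hessian of the chain-structured objective
# sum_i (x_i - d_i)^2 + gamma * sum_i (x_{i+1} - x_i)^2 (up to a factor of 2).
H = np.eye(N)
for i in range(N - 1):
    H[i, i] += gamma
    H[i + 1, i + 1] += gamma
    H[i, i + 1] -= gamma
    H[i + 1, i] -= gamma

# The sensitivity dx_i/dd_j is the (i, j) entry of H^{-1}; for this
# graph-structured problem it decays exponentially with the distance |i - j|.
S = np.linalg.inv(H)
j = N // 2
for dist in [0, 1, 2, 5, 10, 20]:
    print(f"|i-j| = {dist:2d}   |dx_i/dd_j| = {abs(S[j + dist, j]):.3e}")

Running the sketch prints sensitivities that shrink geometrically with the graph distance, consistent with a bound of the form $\Upsilon \rho^{d_\mathcal{G}(i,j)}$; increasing $\gamma$ (weaker relative curvature of the separable terms) slows the decay.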
