Statistical performance of subgradient step-size update rules in Lagrangian relaxations of chance-constrained optimization models

Lagrangian relaxation schemes, coupled with a subgradient procedure, are frequently employed to solve chance-constrained optimization models. The subgradient procedure typically relies on a step-size update rule. Although there is extensive research on the properties of these step-size update rules, there is little consensus on which rules are best suited in practice, especially when the underlying model is a computationally challenging instance of a chance-constrained program. To close this gap, we seek to determine whether a single step-size rule can be statistically guaranteed to perform better than the others. We couple the Lagrangian procedure with three strategies for identifying lower bounds for two-stage chance-constrained programs, and we consider two instances of such models that differ in the presence of binary variables in the second stage. Through a series of computational experiments, we demonstrate that, in marked contrast to existing theoretical results, no statistically significant differences in optimality gaps can be detected among six well-known step-size update rules. Nevertheless, our results show that a Lagrangian procedure does provide a computational benefit over a naive solution method, regardless of the underlying step-size update rule.
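As context for the procedure the abstract refers to, below is a minimal sketch, assuming a generic binary program min c'x s.t. Ax >= b whose constraints Ax >= b are dualized with multipliers lam >= 0; projected subgradient ascent on the Lagrangian dual then yields lower bounds, with the step size t_k chosen by one of several classic rules from the subgradient literature (constant step, constant step length, nonsummable diminishing, square-summable, Polyak). The toy data, the rule parameters, and every function name here are illustrative assumptions, not the paper's specific instances or its six rules.

import numpy as np

def step_size(rule, k, s_norm2, t0=1.0, dual_val=None, target=None):
    # Classic step-size update rules for subgradient methods; all
    # parameters (t0, target) are illustrative choices, not from the paper.
    if rule == "constant":
        return t0
    if rule == "constant_length":   # constant step length: t_k = t0 / ||s_k||
        return t0 / max(np.sqrt(s_norm2), 1e-12)
    if rule == "diminishing":       # nonsummable diminishing: t_k = t0 / sqrt(k)
        return t0 / np.sqrt(k)
    if rule == "square_summable":   # square summable, not summable: t_k = t0 / (1 + k)
        return t0 / (1.0 + k)
    if rule == "polyak":            # Polyak: t_k = (target - g(lam)) / ||s_k||^2
        return max(target - dual_val, 0.0) / max(s_norm2, 1e-12)
    raise ValueError(rule)

def lagrangian_subgradient(c, A, b, rule="diminishing", iters=200, target=None):
    # Maximize the Lagrangian dual of  min c'x  s.t.  Ax >= b, x in {0,1}^n,
    # where Ax >= b is dualized with multipliers lam >= 0.
    m, n = A.shape
    lam = np.zeros(m)
    best = -np.inf
    for k in range(1, iters + 1):
        # The inner minimization is separable over the binaries:
        # x_j = 1 iff its reduced cost c_j - (A'lam)_j is negative.
        red = c - A.T @ lam
        x = (red < 0).astype(float)
        dual_val = red @ x + lam @ b    # g(lam) = min_x c'x + lam'(b - Ax)
        best = max(best, dual_val)      # every g(lam) is a valid lower bound
        s = b - A @ x                   # subgradient of g at lam
        t = step_size(rule, k, float(s @ s), dual_val=dual_val, target=target)
        lam = np.maximum(0.0, lam + t * s)  # projected subgradient ascent
    return best

rng = np.random.default_rng(0)
c = rng.uniform(-1, 1, 8)
A = rng.uniform(0, 1, (3, 8))
b = A.sum(axis=1) * 0.4
for rule in ("constant", "constant_length", "diminishing", "square_summable"):
    print(rule, lagrangian_subgradient(c, A, b, rule=rule))

Each dual value g(lam) is a valid lower bound on the optimal primal value, which is why the sketch tracks the best value seen rather than the last iterate; the Polyak rule additionally requires an estimate (target) of the optimal dual value, which is why it is omitted from the demonstration loop.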

Citation

https://doi.org/10.1007/978-3-031-47859-8_26