Markov Chain-based Policies for Multi-stage Stochastic Integer Linear Programming with an Application to Disaster Relief Logistics

We introduce an aggregation framework to address multi-stage stochastic programs with mixed-integer state variables and continuous local variables (MSILPs). Our aggregation framework imposes additional structure on the integer state variables by leveraging the information of the underlying stochastic process, which is modeled as a Markov chain (MC). We demonstrate that the aggregated MSILP can be solved exactly via a branch-and-cut algorithm integrated with a variant of stochastic dual dynamic programming. To improve tractability, we propose to use this approach to obtain dual bounds. Moreover, we apply two-stage linear decision rule (2SLDR) approximations, in particular a new MC-based variant that we propose, to obtain high-quality decision policies with significantly reduced computational effort. We test the proposed methodologies in an MSILP model for hurricane disaster relief logistics planning. Our empirical evaluation compares the effectiveness of the various proposed approaches and analyzes the trade-offs between policy flexibility, solution quality, and computational effort. Specifically, the 2SLDR approximation yields provably high-quality solutions for our test instances, supported by the proposed bounding procedure. We also extract valuable managerial insights from the solution behaviors exhibited by the underlying decision policies.
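To make the Markov chain model of the stochastic process concrete, the following is a minimal, purely illustrative sketch of sampling scenario paths from an MC. The state names, transition probabilities, and function names are hypothetical assumptions for illustration, not taken from the paper; in the hurricane application the states would encode storm attributes and the transition matrix would be estimated from forecast data.

```python
import random

# Hypothetical MC over storm phases (illustrative states and probabilities,
# not from the paper). Each row of P gives transition probabilities from
# the corresponding state to ["calm", "intensifying", "landfall"].
states = ["calm", "intensifying", "landfall"]
P = {
    "calm":         [0.7, 0.3, 0.0],
    "intensifying": [0.2, 0.5, 0.3],
    "landfall":     [0.0, 0.0, 1.0],  # absorbing: storm has made landfall
}

def sample_path(start, horizon, rng):
    """Simulate one scenario path of the MC over `horizon` stages."""
    path = [start]
    for _ in range(horizon - 1):
        probs = P[path[-1]]
        path.append(rng.choices(states, weights=probs, k=1)[0])
    return path

rng = random.Random(0)  # fixed seed for reproducible sampling
paths = [sample_path("calm", 5, rng) for _ in range(3)]
```

In an MC-based policy, decisions at each stage may depend on the current MC state rather than the full history, which is what enables the aggregation and the MC-based 2SLDR variant described above.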
