Robust Explainable Prescriptive Analytics

We propose a new robust explainable prescriptive analytics framework that minimizes a risk-based objective function under distributional ambiguity by leveraging data collected on past realizations of the uncertain parameters affecting the decision model, together with side information that has predictive power over those uncertainties. The framework solves for an explainable response policy that transforms the side information directly into implementable here-and-now decisions. Such a policy should possess three salient properties: it facilitates explanation of the decisions, ensures that the solutions are implementable, and maintains the computational tractability of the optimization problem. We show that tree-based static and affine policies can achieve these properties. Although historical data are available, the data-generating probability distribution remains unobservable. Hence, we adopt the data-driven robust satisficing framework to address the issue of overfitting that arises when the empirical distribution is used to evaluate the risk-based objective function. We also propose a localized robust satisficing model which, despite having weaker finite-sample guarantees, is computationally attractive and can efficiently solve combinatorial optimization problems under a tree-based static policy. For tractable linear optimization models with recourse, we show that, in some restricted cases, the corresponding robust satisficing models can be solved using existing tractable safe approximation techniques. We also introduce a new tractable safe approximation to address the general model in which the constraints are biaffine in the outcome variables and the side information. We provide a simulation case study demonstrating how the framework can be applied to obtain an explainable policy for allocating taxis to different demand regions in response to weather information.
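To make the notion of a tree-based static response policy concrete, the following is a minimal illustrative sketch, not the paper's exact formulation. It assumes a one-dimensional side-information variable (rainfall), a hypothetical depth-1 tree that partitions the side information into two leaves, and a toy cost with assumed shortage and idle penalties; each leaf commits to a single here-and-now allocation that minimizes the empirical average cost over the historical samples falling in that leaf.

```python
# Hypothetical illustration of a tree-based static response policy.
# All thresholds, penalties, and data below are assumptions for the sketch,
# not values from the paper.

def leaf(x, threshold=5.0):
    """Assign side information x (e.g., rainfall in mm) to a tree leaf."""
    return 0 if x < threshold else 1

def empirical_cost(alloc, demands, shortage_penalty=3.0, idle_cost=1.0):
    """Average cost of a fixed allocation over historical demand samples:
    unmet demand is penalized, unused capacity incurs an idle cost."""
    total = 0.0
    for d in demands:
        total += shortage_penalty * max(d - alloc, 0) + idle_cost * max(alloc - d, 0)
    return total / len(demands)

def fit_static_policy(history, candidate_allocs):
    """history: list of (side_info, demand) pairs.
    Returns one allocation per leaf -- a static, explainable policy,
    since each decision is justified by a simple rule on the side info."""
    policy = {}
    for l in (0, 1):
        demands = [d for x, d in history if leaf(x) == l]
        policy[l] = min(candidate_allocs, key=lambda a: empirical_cost(a, demands))
    return policy

# Toy data: heavier rain tends to coincide with higher taxi demand.
history = [(0.0, 8), (1.0, 9), (2.0, 10), (7.0, 14), (9.0, 15), (12.0, 16)]
policy = fit_static_policy(history, candidate_allocs=range(5, 21))
print(policy)              # one allocation per weather regime
print(policy[leaf(10.0)])  # allocation prescribed when rainfall is 10 mm
```

This sketch uses the empirical distribution directly; the framework in the abstract instead robustifies this fit via robust satisficing to guard against overfitting when the data-generating distribution is unobservable.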
