K-Adaptability in stochastic optimization

We consider stochastic optimization problems in which both the objective function and the feasible set are affected by uncertainty. We address these problems using a K-adaptability approach, in which K solutions to the underlying problem are computed before the uncertainty is resolved, and afterwards the best of them is chosen for the realised scenario. This paradigm can be applied to any underlying optimization problem. We analyze the complexity of the resulting problem from a theoretical viewpoint, showing that deciding whether a feasible solution exists is NP-hard for discrete probability distributions and that evaluating the objective function is #P-hard for continuous probability distributions. Moreover, we prove that an approximation factor for the underlying problem carries over to our problem. Finally, we present exact solution approaches, including a branch-and-price algorithm. An extensive computational study compares the performance of the proposed algorithms on a large set of randomly generated instances.
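To make the paradigm concrete, the following is a minimal sketch (not the paper's code) of how a fixed set of K candidate solutions is evaluated under a discrete scenario distribution: for each scenario, the best feasible candidate is selected after the uncertainty is realised, and the expectation is taken over scenarios. The interface (`cost`/`feasible` callables per scenario) is a hypothetical choice for illustration.

```python
import math

def k_adapt_value(candidates, scenarios):
    """Expected cost of a K-adaptability solution set.

    candidates: list of K first-stage solutions.
    scenarios: list of (probability, cost_fn, feasible_fn) triples
               (a hypothetical interface, one triple per scenario).
    A scenario with no feasible candidate contributes +inf,
    reflecting infeasibility of the candidate set.
    """
    total = 0.0
    for prob, cost, feasible in scenarios:
        # After the uncertainty is realised, pick the cheapest
        # candidate that is feasible in this scenario.
        best = min((cost(x) for x in candidates if feasible(x)),
                   default=math.inf)
        total += prob * best
    return total

# Toy example: two candidates, two equally likely scenarios.
cands = [1, 2]
scens = [
    (0.5, lambda x: x,      lambda x: True),    # both candidates feasible
    (0.5, lambda x: 10 - x, lambda x: x >= 2),  # only x = 2 is feasible
]
print(k_adapt_value(cands, scens))  # 0.5 * 1 + 0.5 * 8 = 4.5
```

This also illustrates the hardness result for discrete distributions: deciding whether the value is finite amounts to checking that every scenario admits a feasible candidate.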
