We consider combinatorial optimization problems with uncertain objective functions. In the min-max-min robust optimization approach, a fixed number k of feasible solutions is computed such that the best of them in each scenario is optimal in the worst case. The idea is to calculate a set of candidate solutions in a potentially expensive preprocessing phase and then, once the actual scenario is known, to select the best solution from this set in real time. In this paper, we investigate the complexity of this min-max-min problem in the case of discrete uncertainty, as well as its connection to the classical min-max robust counterpart. Essentially, it turns out that the min-max-min problem is not harder to solve than the min-max problem, while producing much better solutions in general.
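To illustrate the two objectives on a toy instance, the following sketch evaluates both by brute-force enumeration (the instance data, the representation of solutions as 0/1 vectors, and the enumeration itself are illustrative assumptions, not the paper's algorithm; the actual complexity results concern far more efficient methods):

```python
from itertools import combinations

# Hypothetical toy instance: three feasible 0/1 solutions
# and three discrete cost scenarios (assumed data).
X = [(1, 0, 1), (0, 1, 1), (1, 1, 0)]
U = [(3, 1, 2), (1, 4, 1), (2, 2, 5)]

def cost(c, x):
    """Linear objective value of solution x under scenario c."""
    return sum(ci * xi for ci, xi in zip(c, x))

def min_max(X, U):
    """Classical min-max robustness: commit to one solution,
    then the adversary picks the worst scenario for it."""
    return min(max(cost(c, x) for c in U) for x in X)

def min_max_min(X, U, k):
    """Min-max-min robustness: precompute k candidate solutions,
    the adversary picks a scenario, then we pick the best candidate."""
    return min(
        max(min(cost(c, x) for x in S) for c in U)
        for S in combinations(X, k)
    )

print(min_max(X, U))         # worst-case cost with one fixed solution
print(min_max_min(X, U, 2))  # worst-case cost with k = 2 candidates
```

On this instance the min-max-min value with k = 2 is strictly smaller than the min-max value, matching the abstract's claim that allowing several candidate solutions yields better worst-case guarantees. Note that the enumeration over k-subsets grows exponentially and serves only to make the definitions concrete.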