Guaranteed bounds for optimal stopping problems using kernel-based non-asymptotic uniform confidence bands

In this paper, we introduce an approach for obtaining probabilistically guaranteed upper and lower bounds on the true optimal value of stopping problems. The bounds produced by existing simulation-and-regression approaches, such as those based on least squares Monte Carlo and information relaxation, are stochastic in nature and therefore do not come with a finite-sample guarantee. Our data-driven approach is fundamentally different, as it allows replacing the sampling error with a pre-specified confidence level. The key to this approach is to use high- and low-biased estimates that, with a pre-specified confidence level, are guaranteed to over- and underestimate, respectively, the conditional expected continuation value that appears in the stopping problem’s dynamic programming formulation. By incorporating these guaranteed over- and underestimates into a backward recursive procedure, we obtain probabilistically guaranteed bounds on the problem’s true optimal value. As a byproduct, we present novel kernel-based non-asymptotic uniform confidence bands for regression functions from a reproducing kernel Hilbert space. We derive closed-form formulas for the cases where the data-generating distribution is either known or unknown, which makes our data-driven approach readily applicable in a range of practical situations, including simulation. We illustrate the applicability of the proposed bounding procedure by valuing a Bermudan put option.
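To illustrate the backward recursive structure described above, here is a minimal toy sketch (not the paper's kernel-based estimator): a Bermudan put is valued on a binomial tree where the conditional expected continuation value can be computed exactly, and that exact value is then widened by a hypothetical confidence-band half-width `eps` to mimic the guaranteed over- and underestimates. Propagating the widened values backward yields an interval that contains the true value. The function name, parameters, and the constant band width are assumptions for illustration only.

```python
import numpy as np

def bermudan_put_bounds(S0=100.0, K=100.0, r=0.02, sigma=0.2,
                        T=1.0, n_steps=50, eps=0.0):
    """Toy upper/lower value bounds for a Bermudan put on a binomial tree.

    The exact continuation value at each node is widened by +/- eps,
    standing in for guaranteed over-/under-estimates of the conditional
    expected continuation value; eps=0 recovers the exact tree value.
    """
    dt = T / n_steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)   # risk-neutral up-probability
    disc = np.exp(-r * dt)

    # terminal stock prices and payoffs
    j = np.arange(n_steps + 1)
    S = S0 * u**j * d**(n_steps - j)
    V_lo = np.maximum(K - S, 0.0)
    V_hi = V_lo.copy()

    # backward recursion: value = max(exercise, continuation estimate)
    for step in range(n_steps - 1, -1, -1):
        j = np.arange(step + 1)
        S = S0 * u**j * d**(step - j)
        exercise = np.maximum(K - S, 0.0)
        cont_lo = disc * (p * V_lo[1:step + 2] + (1 - p) * V_lo[:step + 1]) - eps
        cont_hi = disc * (p * V_hi[1:step + 2] + (1 - p) * V_hi[:step + 1]) + eps
        V_lo = np.maximum(exercise, cont_lo)
        V_hi = np.maximum(exercise, cont_hi)

    return V_lo[0], V_hi[0]
```

With `eps=0` the two recursions coincide and return the exact tree value; a positive `eps` spreads them into a valid bracket, mirroring how the width of the confidence band translates into the width of the value bounds.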
