Optimizing Active Surveillance for Prostate Cancer Using Partially Observable Markov Decision Processes

We describe a finite-horizon partially observable Markov decision process (POMDP) approach to optimize decisions about whether and when to perform biopsies for patients on active surveillance for prostate cancer. The objective is to minimize a weighted combination of two criteria: the number of biopsies performed over a patient’s lifetime and the delay in detecting high-risk cancer that warrants more aggressive treatment. Our study also considers the impact of parameter ambiguity caused by variation across models fitted to different clinical studies and by variation in the weights attributed to the reward criteria according to patient preferences. We introduce two fast approximation algorithms for the proposed model and describe properties of optimal policies, including the existence of a control-limit type policy. Numerical results show that our approximations perform well, and we use them to compare the model-based biopsy policies to published guidelines. Although our focus is on prostate cancer active surveillance, there are lessons to be learned for applications to other chronic diseases.
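To make the model structure concrete, the following is a minimal, self-contained sketch (not the paper's model) of a two-state POMDP for biopsy timing: a belief over whether the patient harbors high-risk cancer is updated by Bayes' rule after each negative biopsy, and a control-limit policy biopsies whenever that belief crosses a threshold. All parameter values, weights, and function names here are illustrative assumptions, not the estimates or algorithms from the paper.

```python
"""Illustrative sketch only: a toy two-state POMDP for active-surveillance
biopsy timing. All numbers (progression rate, biopsy sensitivity, weights,
horizon) are placeholder assumptions, not the parameters from the paper."""

import numpy as np

# Hidden states: 0 = low-risk cancer, 1 = high-risk cancer (absorbing here).
P_PROGRESS = 0.05          # assumed annual probability of progressing to high-risk
BIOPSY_SENS = 0.8          # assumed probability a biopsy detects high-risk cancer
W_BIOPSY, W_DELAY = 1.0, 2.0  # assumed weights on the two criteria
HORIZON = 20               # decision epochs (e.g., annual reviews)


def predict(belief_high: float) -> float:
    """One-step prior update: probability the patient is high-risk next epoch."""
    return belief_high + (1.0 - belief_high) * P_PROGRESS


def update_after_negative_biopsy(belief_high: float) -> float:
    """Posterior P(high-risk) given a biopsy that did not find high-risk cancer."""
    p_negative = belief_high * (1.0 - BIOPSY_SENS) + (1.0 - belief_high)
    return belief_high * (1.0 - BIOPSY_SENS) / p_negative


def control_limit_policy(belief_high: float, threshold: float) -> bool:
    """Biopsy whenever the belief of harboring high-risk cancer exceeds a threshold."""
    return belief_high >= threshold


def simulate(threshold: float, n_runs: int = 10_000, seed: int = 0) -> float:
    """Monte Carlo estimate of the weighted cost (biopsies + detection delay)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_runs):
        state, belief, cost = 0, 0.0, 0.0
        for _ in range(HORIZON):
            # Disease progression and the matching belief prediction step.
            if state == 0 and rng.random() < P_PROGRESS:
                state = 1
            belief = predict(belief)
            if state == 1:
                cost += W_DELAY        # one more period of undetected high-risk cancer
            if control_limit_policy(belief, threshold):
                cost += W_BIOPSY
                if state == 1 and rng.random() < BIOPSY_SENS:
                    break              # high-risk cancer detected; patient leaves surveillance
                belief = update_after_negative_biopsy(belief)
        total += cost
    return total / n_runs


if __name__ == "__main__":
    for t in (0.05, 0.10, 0.20):
        print(f"belief threshold {t:.2f}: estimated weighted cost {simulate(t):.2f}")
```

Sweeping the belief threshold in this toy simulation mimics the trade-off the paper optimizes: a lower threshold triggers more biopsies but shortens detection delay, while a higher threshold does the reverse.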

Citation

Li, W., Denton, B. T., Morgan, T. M. Optimizing Active Surveillance for Prostate Cancer Using Partially Observable Markov Decision Processes. Optimization Online, updated February 12, 2021.
