Home care provides personalized medical care and social support to patients in their own homes. We propose a dynamic scheduling framework to assist in assigning patients to health practitioners (HPs) at a single home care agency. We model the decision of which patients to assign to HPs as a discrete-time Markov decision process. Because of the curse of dimensionality and the difficult combinatorial structure of an HP's daily travel, we propose an approximate dynamic programming (ADP) approach. Our method is based on a one-step policy improvement heuristic and includes both predictive and prescriptive components. In particular, the ADP policy is formulated as a stochastic program that estimates future HP assignments using patient features, such as the type of treatment required and the region of residence. We show how to obtain these estimates using supervised learning techniques or by solving a mathematical program. We then solve the stochastic program using a novel adaptation of the integer L-shaped algorithm and present valid inequalities to speed up convergence. We investigate several extensions to account for multiple patient-HP assignments, patients who return for service, and periodic care. We compare our solution methodology to existing policies in a discrete-event simulation using data from a Canadian home care provider.
Working Paper, 2018. Department of Management, University of Toronto Scarborough, and Rotman School of Management, Toronto, Ontario M1C 1A4; Schulich School of Business, York University, 111 Ian Macdonald Boulevard, Toronto, Ontario M3J 1P3.