A stochastic program typically involves several parameters, including deterministic first-stage parameters and stochastic second-stage elements, that serve as input data. Such programs must be re-solved whenever any input parameter changes. In practice, however, decisions are often needed quickly, and solving a stochastic program from scratch for every change in the input data can be computationally costly. This work addresses this challenge for two-stage stochastic linear programs (2-SLPs) whose first-stage constraints have varying right-hand sides. We construct a Piecewise Linear Difference-of-Convex (PLDC) policy by leveraging optimal bases from previous solves. This PLDC policy retains optimal solutions for previously encountered parameters and provides high-quality solutions for new right-hand-side vectors. The proposed policy applies directly to the extensive form of the 2-SLP. When stage decomposition algorithms, such as the L-Shaped and Stochastic Decomposition methods, are used to solve the 2-SLP, we develop L-Shaped- and Stochastic-Decomposition-guided static procedures to train the policy. We also develop a sequential procedure that iteratively tracks the quality of the learned policy and incorporates new basis information to improve it. We assess the performance of our policy both analytically and numerically. Experimental results show that the policy prescribes solutions that are feasible and optimal for a significant percentage of new instances.
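The basis-reuse idea underlying the policy can be illustrated on a plain LP with a varying right-hand side. The sketch below uses a hypothetical toy instance (not the paper's construction) and relies on a standard LP sensitivity fact: reduced costs do not depend on the right-hand side, so a stored optimal basis remains optimal for a new right-hand side whenever the associated basic solution stays nonnegative, in which case the new solution is recovered by a single matrix-vector product.

```python
import numpy as np

# Hypothetical tiny LP in standard form: min c.x  s.t.  A x = b, x >= 0,
# standing in for the first stage of a 2-SLP with varying right-hand side b.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
c = np.array([-1.0, -2.0, 0.0, 0.0])

basis = [0, 1]                       # optimal basis from a previous solve at b = (4, 3)
B_inv = np.linalg.inv(A[:, basis])   # factorize once, reuse for every new b

def reuse_basis(b_new):
    """Try to solve the LP for a new right-hand side using the stored basis.

    Reduced costs are unchanged by b, so if B^{-1} b_new >= 0 the old
    basis is still optimal; otherwise a fresh solve is required.
    """
    x_B = B_inv @ b_new
    if np.all(x_B >= -1e-9):         # primal feasible -> still optimal
        x = np.zeros(A.shape[1])
        x[basis] = x_B
        return x
    return None                      # basis no longer optimal: re-solve

print(reuse_basis(np.array([5.0, 2.0])))   # basis still optimal: [3. 2. 0. 0.]
print(reuse_basis(np.array([2.0, 3.0])))   # basic solution infeasible: None
```

A policy built from several such stored bases can cover a region of right-hand sides, falling back to a full solve only when no stored basis certifies optimality.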