Stochastic decomposition (SD) has been a computationally effective approach for solving large-scale stochastic programming (SP) problems arising in practical applications. By using incremental sampling, SD is designed to discover an appropriate sample size for a given SP instance, precluding the need for either scenario reduction or arbitrary sample sizes in creating sample average approximations (SAA). Relative to these alternatives, SD provides solutions of similar quality in far less computational time using ordinarily available computational resources. However, previous versions of SD did not allow randomness to appear in the second-stage cost coefficients. In this paper, we extend the capabilities of SD by relaxing this assumption on the cost coefficients of the second stage. In addition to the algorithmic enhancements necessary to achieve this, we present the details of an implementation of these extensions that preserves the computational edge of SD. Finally, we report the results obtained from the latest implementation of SD on a variety of test instances generated for problems from the literature, and we compare these results with those obtained from the regularized L-shaped method applied to the SAA function with different sample sizes.