We consider the context of ``simulation-based recursions,'' that is, recursions that involve quantities needing to be estimated using a stochastic simulation. Examples include stochastic adaptations of fixed-point and gradient descent recursions obtained by replacing the function and derivative values appearing within the recursion by their Monte Carlo counterparts. The primary motivating settings are Simulation Optimization and Stochastic Root Finding problems, where the minimum and the zero of a function are sought, respectively, with only Monte Carlo estimates of the functions appearing within the problem. We ask how much Monte Carlo sampling needs to be performed within simulation-based recursions so that the resulting iterates remain consistent and, more importantly, \emph{efficient}, where ``efficient'' implies convergence at the fastest possible rate. Answering these questions involves trading off two types of error inherent in the iterates: the deterministic error due to the recursion and the ``stochastic'' error due to sampling. As we demonstrate through a characterization of the relationship between sample sizing and convergence rates, efficiency and consistency are intimately coupled with the speed of the underlying recursion, with faster recursions yielding a wider regime of ``optimal'' sampling rates. The implications of our results for practical implementation are immediate, since they provide specific guidance on optimal simulation expenditure within a variety of stochastic recursions.
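To make the setting concrete, the following is a minimal sketch (not the paper's method) of a simulation-based gradient descent recursion on a hypothetical toy problem: minimizing $f(x) = \mathbb{E}[\tfrac{1}{2}(x - 1 + \xi)^2]$ with $\xi \sim N(0,1)$, whose minimizer is $x^* = 1$. The true derivative $f'(x) = x - 1$ is replaced by a Monte Carlo average of noisy gradient observations, and the per-iteration sample size $m_k$ is grown geometrically; the problem, step size, and growth factor are all illustrative choices, not prescriptions from the text.

```python
import numpy as np

def noisy_gradients(x, m, rng):
    """m Monte Carlo observations of the gradient at x: (x - 1) + noise."""
    return (x - 1.0) + rng.standard_normal(m)

def sampled_gradient_descent(x0=5.0, step=0.5, growth=2.0, iters=20, seed=0):
    """Gradient recursion with Monte Carlo gradient estimates and
    geometrically increasing per-iteration sample sizes m_k."""
    rng = np.random.default_rng(seed)
    x, m = x0, 1.0
    for _ in range(iters):
        # Monte Carlo estimate of f'(x): sampling error is O(1/sqrt(m)).
        g_hat = noisy_gradients(x, int(m), rng).mean()
        # Deterministic recursion step: error contracts geometrically.
        x -= step * g_hat
        # Grow the sample size so the stochastic error keeps pace
        # with the (fast) deterministic error of the recursion.
        m *= growth
    return x
```

The two error sources the abstract describes are visible here: with a fixed `m` the iterates stall at a noise floor of order $1/\sqrt{m}$, while growing `m` fast enough lets the overall error track the recursion's own convergence rate.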