We consider the problem of accurately computing the finite difference $f(\x+\s)-f(\x)$ when $\Vert\s\Vert$ is very small. Direct evaluation of this difference in floating-point arithmetic succumbs to cancellation error and yields exactly $0$ once $\Vert\s\Vert$ is sufficiently small. Nonetheless, accurate computation of this finite difference is required by many optimization algorithms for a ``sufficient decrease'' test. Reps and Rall proposed a program transformation called ``computational divided differencing,'' reminiscent of automatic differentiation, that computes these differences to high accuracy. The running time to compute the difference is a small constant multiple of the running time to compute $f$. Unlike automatic differentiation, however, the technique is not fully general because of a difficulty with branching code (i.e., ``if'' statements). We make several remarks about the application of computational divided differencing to optimization. One point is that the technique can be used effectively as a stagnation test.
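
To make the cancellation phenomenon and the flavor of the transformation concrete, the following Python sketch propagates value/difference pairs through each arithmetic operation, a minimal instance of computational divided differencing. Reps and Rall formulate the technique as a source-to-source transformation; purely for brevity we assume an operator-overloading implementation here, and the class name \texttt{DD} and the test function are our own illustrative choices, not taken from their paper.

\begin{verbatim}
class DD:
    """Pair (v, d): v = g(x) and d = g(x+s) - g(x), with d propagated
    directly through each operation so it is never formed by
    subtracting two nearly equal quantities."""
    def __init__(self, v, d=0.0):
        self.v = v  # value at x
        self.d = d  # difference g(x+s) - g(x)

    @staticmethod
    def _lift(a):
        # constants have zero difference
        return a if isinstance(a, DD) else DD(float(a), 0.0)

    def __add__(self, other):
        o = DD._lift(other)
        return DD(self.v + o.v, self.d + o.d)
    __radd__ = __add__

    def __sub__(self, other):
        o = DD._lift(other)
        return DD(self.v - o.v, self.d - o.d)

    def __rsub__(self, other):
        o = DD._lift(other)
        return DD(o.v - self.v, o.d - self.d)

    def __mul__(self, other):
        o = DD._lift(other)
        # (u+du)*(w+dw) - u*w = du*w + u*dw + du*dw: no cancellation
        return DD(self.v * o.v,
                  self.d * o.v + self.v * o.d + self.d * o.d)
    __rmul__ = __mul__

def f(t):
    return t * t - 3 * t  # any straight-line arithmetic code works

x, s = 1.0e8, 1.0e-9     # s is below half an ulp of x
print(f(x + s) - f(x))   # 0.0: x + s rounds to x, total cancellation
print(f(DD(x, s)).d)     # ~0.199999997, matching 2*x*s + s*s - 3*s
\end{verbatim}

Note that if \texttt{f} contained a branch such as \texttt{if t > c}, the evaluations at $\x$ and $\x+\s$ could take different paths, which is precisely the difficulty with branching code noted above.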