This short paper presents a derivative-free quadratic regularization method for unconstrained minimization of a smooth function with Lipschitz continuous gradient. At each iteration, trial points are computed by minimizing a quadratic regularization of a local model of the objective function. The models are based on forward finite-difference gradient approximations. By using a suitable acceptance condition for the trial points, the accuracy of the gradient approximations is dynamically adjusted as a function of the regularization parameter used to control the stepsizes. Worst-case evaluation complexity bounds are established for the new method. Specifically, for nonconvex problems, it is shown that the proposed method needs at most $\mathcal{O}\left(n\epsilon^{-2}\right)$ function evaluations to generate an $\epsilon$-approximate stationary point, where $n$ is the problem dimension. For convex problems, an evaluation complexity bound of $\mathcal{O}\left(n\epsilon^{-1}\right)$ is obtained, which is reduced to $\mathcal{O}\left(n\log(\epsilon^{-1})\right)$ under strong convexity. Numerical results illustrating the performance of the proposed method are also reported.
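To make the iteration concrete, the following is a minimal sketch (not the paper's algorithm) of a derivative-free quadratic regularization loop of the kind described above: the gradient is approximated by forward finite differences, the trial step minimizes the regularized linear model $m(s) = f(x) + g^\top s + \frac{\sigma}{2}\|s\|^2$ (so $s = -g/\sigma$), and the finite-difference stepsize is coupled to the regularization parameter $\sigma$. The function names, the coupling rule `h = 1/(100*sigma)`, and the constants `theta` and the $\sigma$ update factors are illustrative assumptions, not the choices analyzed in the paper.

```python
import numpy as np

def fd_gradient(f, x, h):
    """Forward finite-difference gradient; costs n extra evaluations of f."""
    n = x.size
    fx = f(x)
    g = np.empty(n)
    for i in range(n):
        xh = x.copy()
        xh[i] += h
        g[i] = (f(xh) - fx) / h
    return fx, g

def dfqr_sketch(f, x0, eps=1e-3, sigma=1.0, theta=0.1, max_iter=1000):
    """Illustrative derivative-free quadratic regularization loop.

    The minimizer of m(s) = f(x) + g^T s + (sigma/2)||s||^2 is s = -g/sigma.
    The finite-difference stepsize h shrinks as sigma grows, mimicking the
    dynamic accuracy adjustment described in the abstract (assumed rule).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        h = 1.0 / (100.0 * sigma)            # tie FD accuracy to sigma (assumption)
        fx, g = fd_gradient(f, x, h)         # n + 1 function evaluations
        if np.linalg.norm(g) <= eps:         # approximate stationarity test
            break
        s = -g / sigma                       # exact minimizer of the quadratic model
        # Sufficient-decrease acceptance test (illustrative constant theta):
        if f(x + s) <= fx - theta * np.dot(g, g) / (2.0 * sigma):
            x = x + s
            sigma = max(sigma / 2.0, 1e-3)   # accepted: relax regularization
        else:
            sigma *= 2.0                     # rejected: tighten regularization
    return x
```

Since each iteration spends $n+1$ evaluations on the forward-difference gradient plus one on the trial point, an $\mathcal{O}(\epsilon^{-2})$ bound on the iteration count translates into the $\mathcal{O}(n\epsilon^{-2})$ evaluation complexity stated above.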