In this paper we propose an adaptive gradient method for optimization on Riemannian manifolds. The update rule for the stepsizes relies only on gradient evaluations. Assuming that the objective function is bounded from below and that its gradient field is Lipschitz continuous, we establish worst-case complexity bounds on the number of gradient evaluations that the method requires to generate an approximate stationary point. Preliminary numerical results illustrate the potential advantages of different versions of our method in comparison with a Riemannian gradient method with Armijo line search.
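To fix ideas, below is a minimal sketch of a Riemannian gradient iteration with an adaptive stepsize driven only by gradient evaluations, written for the unit sphere. The specific stepsize rule shown (a simple grow/shrink heuristic based on successive gradient norms) and all function names are illustrative assumptions, not the method proposed in the paper.

```python
# Hypothetical sketch: Riemannian gradient descent on the unit sphere
# with a gradient-based adaptive stepsize. Illustrative only; this is
# not the stepsize rule analyzed in the paper.
import numpy as np

def project_tangent(x, v):
    # Orthogonal projection of v onto the tangent space of the sphere at x.
    return v - np.dot(x, v) * x

def retract(x, v):
    # Retraction onto the sphere: step in the tangent direction, renormalize.
    y = x + v
    return y / np.linalg.norm(y)

def riemannian_adaptive_gd(grad_f, x0, alpha0=1e-2, tol=1e-6, max_iter=1000):
    x = x0 / np.linalg.norm(x0)
    alpha = alpha0
    g = project_tangent(x, grad_f(x))            # Riemannian gradient at x
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:             # approximate stationarity
            break
        x_new = retract(x, -alpha * g)           # gradient step + retraction
        g_new = project_tangent(x_new, grad_f(x_new))
        # Adaptive stepsize from gradient information only (assumed rule):
        # shrink when the gradient norm grows, grow mildly otherwise.
        ratio = np.linalg.norm(g_new) / np.linalg.norm(g)
        alpha = alpha / ratio if ratio > 1 else alpha * 1.1
        x, g = x_new, g_new
    return x

# Example: leading eigenvector of a symmetric A via f(x) = -x^T A x / 2,
# whose Euclidean gradient is -A x.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = (A + A.T) / 2
x_star = riemannian_adaptive_gd(lambda x: -A @ x, rng.standard_normal(5))
```

In this sketch, the tangent-space projection and retraction play the roles that the exponential map or a general retraction would play on an arbitrary manifold; the comparison baseline in the paper instead selects the stepsize by an Armijo backtracking line search, which requires additional function evaluations per iteration.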