In this work, we introduce a novel numerical algorithm, called RaBVItG (Radial Basis Value Iteration Game), to approximate feedback Nash equilibria of deterministic differential games. More precisely, RaBVItG is based on value iteration schemes in a meshfree context, using radial basis function interpolation. It approximates optimal feedback Nash policies for multiple players while mitigating the high dimensionality that, in general, this type of problem involves. Moreover, RaBVItG implements a game iteration structure that computes the game equilibrium at every value iteration step, in order to improve the accuracy of the solutions. Finally, to validate our method, we apply the algorithm to a set of benchmark problems and compare the results with those returned by another algorithm from the literature. When comparing the numerical solutions, we observe that our algorithm is less computationally expensive and, in general, yields lower errors.
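To illustrate the overall structure described above (an outer value iteration, an inner game iteration computing mutual best responses, and radial basis function interpolation of the value functions on scattered nodes), the following is a minimal sketch only, not the authors' RaBVItG implementation. It assumes a hypothetical one-dimensional two-player game with dynamics $\dot{x} = u_1 + u_2$, running cost $x^2 + u_i^2$ for player $i$, and arbitrary numerical choices (Gaussian RBFs, shape parameter, node set, discretized controls).

```python
import numpy as np

# Hypothetical toy problem (not from the paper): x' = u1 + u2,
# player i minimizes the discounted integral of x^2 + u_i^2.
rho, dt = 0.5, 0.1
beta = 1.0 / (1.0 + rho * dt)          # discrete discount factor
nodes = np.linspace(-1.0, 1.0, 21)     # meshfree collocation nodes
eps = 2.0                              # RBF shape parameter (assumed)

def phi(r):                            # Gaussian radial basis function
    return np.exp(-(eps * r) ** 2)

Phi = phi(np.abs(nodes[:, None] - nodes[None, :]))   # interpolation matrix

def interp(alpha, x):                  # evaluate the RBF interpolant at x
    x = np.atleast_1d(x)
    return phi(np.abs(x[:, None] - nodes[None, :])) @ alpha

controls = np.linspace(-1.0, 1.0, 41)              # discretized control set
alpha = [np.zeros(len(nodes)) for _ in range(2)]   # RBF weights per player

for it in range(200):                  # outer value iteration
    u = [np.zeros(len(nodes)), np.zeros(len(nodes))]
    for g in range(20):                # inner game iteration: best responses
        u_old = [u[0].copy(), u[1].copy()]
        for i in range(2):
            other = u[1 - i]
            # candidate next states for every (node, control) pair
            xn = nodes[:, None] + dt * (controls[None, :] + other[:, None])
            cost = dt * (nodes[:, None] ** 2 + controls[None, :] ** 2) \
                   + beta * interp(alpha[i], xn.ravel()).reshape(xn.shape)
            u[i] = controls[np.argmin(cost, axis=1)]
        if max(np.max(np.abs(u[k] - u_old[k])) for k in range(2)) < 1e-8:
            break                      # controls reached a fixed point
    new_alpha = []
    for i in range(2):
        xn = nodes + dt * (u[0] + u[1])
        rhs = dt * (nodes ** 2 + u[i] ** 2) + beta * interp(alpha[i], xn)
        new_alpha.append(np.linalg.solve(Phi, rhs))  # refit RBF weights
    done = max(np.max(np.abs(new_alpha[k] - alpha[k])) for k in range(2)) < 1e-6
    alpha = new_alpha
    if done:
        break

x_query = 0.5
j = np.argmin(np.abs(nodes - x_query))
print("approximate equilibrium feedback at x=0.5:",
      [float(u[k][j]) for k in range(2)])
```

The key design choice this sketch mirrors is the nesting: the equilibrium of the static best-response problem is recomputed at every value iteration step, rather than alternating full value sweeps per player.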