We present a feedback scheme for non-cooperative dynamic games and investigate its stabilizing properties. The dynamic games are modeled as generalized Nash equilibrium problems (GNEPs), in which the shared constraint consists of linear discrete-time dynamic equations (e.g., sampled from a partial or ordinary differential equation) that are jointly controlled by the players' actions. Moreover, the players' individual objectives are interdependent and defined over a fixed time horizon. The feedback law is synthesized by moving-horizon model predictive control (MPC). We investigate the asymptotic stability of the resulting closed-loop dynamics. To this end, we introduce α-quasi GNEPs, a family of auxiliary problems based on a modification of the Nikaido-Isoda function, which approximate the original games. Basing the MPC scheme on these auxiliary problems, we derive conditions on the players' objectives that guarantee asymptotic stability of the closed loop whenever stabilizing end constraints are enforced. The analysis rests on showing that the associated optimal-value function is a Lyapunov function. In addition, we identify a suitable Lyapunov function for the MPC scheme based on the original GNEP, whose solution satisfies the stabilizing end constraints. The theoretical results are complemented by numerical experiments.
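For orientation, the following is a minimal LaTeX sketch of the classical Nikaido-Isoda function together with one common α-regularization; the player objectives $\theta_\nu$, the splitting $x=(x^\nu,x^{-\nu})$, and the quadratic regularization term are illustrative assumptions and need not coincide with the exact modification underlying the α-quasi GNEPs.

% Assumed setting: N players, player \nu controls x^\nu and minimizes
% \theta_\nu(x^\nu, x^{-\nu}) subject to a shared (coupling) constraint.
% Classical Nikaido-Isoda function:
\[
  \Psi(x,y) \;=\; \sum_{\nu=1}^{N}
    \Bigl[\theta_\nu\bigl(x^{\nu}, x^{-\nu}\bigr)
          \;-\; \theta_\nu\bigl(y^{\nu}, x^{-\nu}\bigr)\Bigr].
\]
% One common \alpha-regularized variant (proximal term with \alpha > 0),
% given here only as an illustration of the kind of modification meant:
\[
  \Psi_\alpha(x,y) \;=\; \sum_{\nu=1}^{N}
    \Bigl[\theta_\nu\bigl(x^{\nu}, x^{-\nu}\bigr)
          \;-\; \theta_\nu\bigl(y^{\nu}, x^{-\nu}\bigr)
          \;-\; \tfrac{\alpha}{2}\,\bigl\lVert x^{\nu}-y^{\nu}\bigr\rVert^{2}\Bigr].
\]
% In this convention, x is an equilibrium candidate of the regularized game
% if \sup_{y} \Psi_\alpha(x,y) \le 0 over the shared feasible set.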