Dynamic robust optimization deals with sequential, multi-stage decisions under uncertain, worst-case scenarios. To manage its complexity and mitigate the curse of dimensionality, decision rules simplify the search for an optimal policy. This paper explores a middle ground between two common decision rules: simple but imprecise constant policies, and accurate but less scalable affine policies. We introduce a method that converges linearly to the performance of optimal affine policies by iteratively enriching the information basis with new components, yielding near-optimal affine policies with a compact information basis. We also provide an efficient procedure for constructing this basis, with time complexity comparable to that of random generation. This advance unlocks potential benefits in applications such as approximating large-scale dynamic robust optimization problems and constructing nonlinear decision rules with few parameters. The proposed approach is also amenable to parallel computation on GPUs/TPUs.