Data Collaboration (DC) enables multiple parties to jointly train a model by sharing only linear projections of their private datasets. The core challenge in DC is to align the bases of these projections without revealing each party’s secret basis. While existing theory suggests that any target basis spanning the common subspace should suffice, in practice the choice of basis can substantially affect both accuracy and numerical stability. We introduce Orthonormal Data Collaboration (ODC), which enforces orthonormal secret and target bases, thereby reducing alignment to the classical Orthogonal Procrustes problem, which admits a closed-form solution. We prove that the resulting change-of-basis matrices achieve \emph{orthogonal concordance}, aligning all parties’ representations up to a shared orthogonal transform and rendering downstream performance invariant to the target basis. Computationally, ODC reduces the alignment complexity from \(O(\min\{a(cl)^2,\,a^2c\})\) to \(O(acl^2)\), and empirical evaluations show up to \(100\times\) speed-ups with equal or better accuracy across benchmarks. ODC preserves DC’s one-round communication pattern and privacy assumptions, providing a simple and efficient drop-in improvement to existing DC pipelines.
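
As a brief sketch of the closed-form alignment step referenced above (the symbols \(\tilde{X}_i\), \(Z\), and \(G_i\) are illustrative notation for this sketch, not definitions taken from the abstract): given party \(i\)'s shared anchor projection \(\tilde{X}_i\) and a target representation \(Z\), the Orthogonal Procrustes problem and its standard SVD-based solution read
\[
G_i \;=\; \operatorname*{arg\,min}_{G:\,G^\top G = I}\; \bigl\lVert \tilde{X}_i G - Z \bigr\rVert_F,
\qquad
G_i = U_i V_i^\top \quad \text{where } \tilde{X}_i^\top Z = U_i \Sigma_i V_i^\top \text{ is an SVD.}
\]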