Wasserstein distributionally robust optimization (DRO) finds robust solutions by hedging against data perturbations represented by distributions within a Wasserstein ball. This robustness is closely tied to a regularization effect, which has been studied for continuous losses in various settings. However, existing results do not directly apply to the 0-1 loss, which arises frequently in uncertainty quantification, classification, and chance-constrained programs. In this paper, we relate Wasserstein DRO with the 0-1 loss to a new regularization problem, in which the regularization term is a polynomial in the radius of the Wasserstein ball and the density around the decision boundary. Importantly, this result reveals a qualitative difference between the 0-1 loss and continuous losses in terms of radius selection: in most cases of interest, it suffices to choose a radius of an order smaller than that given by the root-n rule. Numerical experiments demonstrate the effectiveness of the implied radius selection rule.
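To build intuition for how the worst-case 0-1 loss depends on both the radius and the density near the decision boundary, the following is a minimal numerical sketch on a hypothetical 1-D toy problem. It assumes, for illustration only, an infinity-Wasserstein perturbation model, under which an adversary may move each data point by at most the radius `delta`; this is a simplification, not the construction used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D toy problem: fixed threshold classifier sign(x), labels y in {-1, +1}.
n = 10_000
x = rng.normal(loc=0.5, scale=1.0, size=n)                   # features
y = np.where(rng.random(n) < 0.9, np.sign(x), -np.sign(x))   # 90% clean labels

emp_loss = np.mean(np.sign(x) != y)                          # empirical 0-1 loss

def worst_case_01(delta):
    """Worst-case 0-1 loss over an infinity-Wasserstein ball of radius delta:
    any correctly classified point within delta of the boundary x = 0 can be
    pushed across it, so it contributes to the worst-case loss."""
    flippable = (np.sign(x) == y) & (np.abs(x) <= delta)
    return emp_loss + np.mean(flippable)

for delta in (0.0, 0.1, 0.3):
    print(f"delta={delta:.1f}  worst-case 0-1 loss={worst_case_01(delta):.3f}")
```

The gap between the worst-case and empirical losses grows with `delta` at a rate governed by the mass of the data near the boundary, which is the qualitative behavior the regularization term above captures.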