In this paper, we further establish two types of DC (Difference of Convex functions) programs for $l_0$ sparse reconstruction. Our DC objective functions are specified as the difference of two norms. One is the difference of the $l_1$ and $l_{\sigma_q}$ norms (DC $l_1$-$l_{\sigma_q}$ for short), where the $l_{\sigma_q}$ norm of a vector is the $l_1$ norm of its best $q$-term approximation ($q\geq1$). The other is the difference of the $l_1$ and $l_r$ norms with $r>1$ (DC $l_1$-$l_r$ for short). The effectiveness of these special DC programs is illustrated and analyzed. Moreover, we design two iterative algorithms for solving the DC $l_1$-$l_{\sigma_q}$ and DC $l_1$-$l_r$ models. The first is based on the proximal gradient framework; exploiting the special structure of the $l_{\sigma_q}$ norm, we derive a closed-form solution of each subproblem, which we call the generalized $q$-term shrinkage operator. The second is a majorized penalty method. Convergence results for both algorithms are presented. The computational results demonstrate that the DC $l_1$-$l_{\sigma_q}$ and $l_1$-$l_r$ approaches are efficient and competitive in terms of sparsity and accuracy compared with the $l_p$ model with $0<p<1$.
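For concreteness, a sketch of the two nonconvex penalties described above, written in illustrative notation (the labels $P_{\sigma_q}$ and $P_r$ are ours) and using the standard fact that the best $q$-term approximation of a vector keeps its $q$ largest-magnitude entries:
\[
\|x\|_{\sigma_q} \;=\; \sum_{i=1}^{q} |x|_{(i)},
\qquad
P_{\sigma_q}(x) \;=\; \|x\|_1 - \|x\|_{\sigma_q},
\qquad
P_{r}(x) \;=\; \|x\|_1 - \|x\|_r \quad (r>1),
\]
where $|x|_{(1)}\geq|x|_{(2)}\geq\cdots$ denote the magnitudes of the entries of $x$ sorted in decreasing order. In particular, $P_{\sigma_q}(x)=0$ whenever $x$ has at most $q$ nonzero entries, which is what makes the difference of these two convex norms a natural surrogate for the $l_0$ sparsity requirement.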