We consider a nonconvex optimization problem consisting of maximizing the difference of two convex functions. We present a randomized method that requires low computational effort at each iteration: a randomized coordinate descent method applied to the so-called Toland dual problem. We prove subsequence convergence to dual stationary points, a new notion that we introduce and show to be tighter than standard criticality. An almost sure rate of convergence of an optimality measure of the dual sequence is also established. We demonstrate the potential of our results on three Principal Component Analysis (PCA) models, each resulting in an extremely simple algorithm.
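As a hedged illustration of the duality underlying this approach (the notation and the toy problem below are ours, not taken from the abstract): for proper, closed, convex $f$ and $g$, Toland duality states that $\sup_x \{ f(x) - g(x) \} = \sup_y \{ g^*(y) - f^*(y) \}$, where $f^*(y) = \sup_x \{ \langle y, x \rangle - f(x) \}$ is the convex conjugate, so coordinate descent can be run on the dual rather than the primal. The sketch below applies randomized coordinate maximization to the Toland dual of the hypothetical toy problem $\min_x \tfrac{1}{2} x^\top Q x - \|x\|_1$ with $Q \succ 0$; it is not the paper's algorithm or its PCA models. Here the dual amounts to maximizing the convex quadratic $\tfrac{1}{2} y^\top Q^{-1} y$ over the box $\|y\|_\infty \le 1$, and since each coordinate restriction is convex, an exact coordinate maximizer lies at an endpoint $y_i \in \{-1, +1\}$.

```python
import numpy as np

# Hedged toy sketch (not the paper's algorithm or models):
# primal DC problem:  min_x  (1/2) x^T Q x - ||x||_1,  Q positive definite.
# Toland dual:        max_y  (1/2) y^T Q^{-1} y   s.t.  ||y||_inf <= 1,
# i.e. maximizing a convex quadratic over a box. The coordinate restriction
# is convex in y_i, so an exact coordinate maximizer is an endpoint +/-1.

rng = np.random.default_rng(0)
n = 20
B = rng.standard_normal((n, n))
Q = B @ B.T + np.eye(n)            # positive definite
M = np.linalg.inv(Q)               # dual quadratic: (1/2) y^T M y

def dual_obj(y):
    return 0.5 * y @ M @ y

y = rng.uniform(-1.0, 1.0, size=n)  # start inside the box
for _ in range(2000):
    i = rng.integers(n)             # pick a random coordinate
    best_s, best_val = y[i], dual_obj(y)
    for s in (-1.0, 1.0):           # test both box endpoints
        y_trial = y.copy()
        y_trial[i] = s
        val = dual_obj(y_trial)
        if val > best_val:
            best_s, best_val = s, val
    y[i] = best_s                   # exact coordinate maximization

x = M @ y                           # primal candidate: x = Q^{-1} y
print("dual objective :", dual_obj(y))
print("primal DC value:", 0.5 * x @ Q @ x - np.abs(x).sum())
```

Each update is a cheap endpoint comparison, which is the sense in which the per-iteration effort is low; a primal candidate is recovered as $x = \nabla g^*(y) = Q^{-1} y$, and at a critical pair the primal and dual objective values coincide by Toland duality.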