Many applications arising in a variety of fields can be well illustrated by the task of recovering the low-rank and sparse components of a given matrix. Recently, it was discovered that this NP-hard task can be accomplished well, both theoretically and numerically, by heuristically solving a convex relaxation problem in which the widely acknowledged nuclear norm and l1 norm are used to induce low rank and sparsity, respectively. In the literature, it is conventionally assumed that all entries of the matrix to be recovered are exactly known (via observation). To capture even more applications, this paper studies the recovery task in more general settings: only a fraction of the entries of the matrix can be observed, and the observation is corrupted by both impulsive and Gaussian noise. The resulting model falls within the applicable scope of the classical augmented Lagrangian method. Moreover, the separable structure of the new model enables us to solve the involved subproblems more efficiently by splitting the augmented Lagrangian function. Hence, some implementable numerical algorithms are developed in the spirit of the well-known alternating direction method and the parallel splitting augmented Lagrangian method. Preliminary numerical experiments verify that these augmented-Lagrangian-based algorithms are easily implementable and surprisingly efficient for tackling the new recovery model.
Recovering low-rank and sparse components of matrices from incomplete and noisy observations
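The abstract describes a convex recovery model (nuclear norm plus l1 norm, fitted to partially observed, noisy data) solved by splitting-based augmented Lagrangian algorithms. The following is a minimal NumPy sketch of that idea only, not the paper's own algorithms: it applies alternating proximal-gradient steps to a penalized surrogate of such a model. The parameters tau, mu, the iteration count, and the function names are illustrative assumptions, not values or notation taken from the paper.

```python
import numpy as np

def svt(X, thresh):
    """Singular value thresholding: proximal operator of thresh * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - thresh, 0.0)) @ Vt

def soft(X, thresh):
    """Entrywise soft-thresholding: proximal operator of thresh * l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - thresh, 0.0)

def recover(C_obs, mask, tau=0.1, mu=1.0, iters=200):
    """Alternating proximal-gradient steps on the penalized surrogate
        min_{A,E} ||A||_* + tau*||E||_1 + (mu/2)*||mask*(A + E - C_obs)||_F^2,
    where mask is 1 on observed entries and 0 elsewhere (illustrative surrogate,
    not the paper's exact formulation)."""
    A = np.zeros_like(C_obs)
    E = np.zeros_like(C_obs)
    for _ in range(iters):
        # Residual on the observed entries only; unobserved entries are free.
        R = mask * (A + E - C_obs)
        # Low-rank block: one proximal-gradient step with step size 1/mu.
        A = svt(A - R, 1.0 / mu)
        R = mask * (A + E - C_obs)
        # Sparse block: one proximal-gradient step with step size 1/mu.
        E = soft(E - R, tau / mu)
    return A, E

# Small synthetic check: rank-2 plus sparse matrix, 80% of entries observed,
# corrupted by small Gaussian noise.
rng = np.random.default_rng(0)
L = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
S = (rng.random((50, 50)) < 0.05) * 5.0 * rng.standard_normal((50, 50))
mask = (rng.random((50, 50)) < 0.8).astype(float)
C_obs = mask * (L + S + 0.01 * rng.standard_normal((50, 50)))
A_hat, E_hat = recover(C_obs, mask)
print("relative error in low-rank part:",
      np.linalg.norm(A_hat - L) / np.linalg.norm(L))
```

The quadratic penalty here stands in for the noise-aware data-fidelity constraint mentioned in the abstract; the alternating structure mirrors the idea of splitting the augmented Lagrangian so that each block (low-rank and sparse) is handled by a cheap proximal operation.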