Continuous exact relaxation and alternating proximal gradient algorithm for partial sparse and partial group sparse optimization problems

In this paper, we consider a partial sparse and partial group sparse optimization problem, where the loss function is a continuously differentiable function (possibly nonconvex) and the penalty term consists of two parts associated with sparsity and group sparsity. The first part is the $\ell_0$ norm of ${\bf x}$ and the second part is the $\ell_{2,0}$ norm of ${\bf y}$, i.e., $\lambda_1\|{\bf x}\|_0+\lambda_2\|{\bf y}\|_{2,0}$, where $({\bf x},{\bf y})\in \mathbb{R}^{n+m}$ is the decision variable. We give a continuous relaxation model of the original problem, in which the two parts of the penalty term are relaxed by the Capped-$\ell_1$ of ${\bf x}$ and the group Capped-$\ell_1$ of ${\bf y}$, respectively. Firstly, we define two kinds of stationary points of the relaxation model. Based on the lower bound property of the d-stationary points of the relaxation model, we establish the equivalence between the solutions of the original problem and those of the relaxation model, which provides a theoretical basis for solving the original problem via the relaxation problem. Secondly, we propose an alternating proximal gradient (APG) algorithm to solve the relaxation model, and prove that the whole sequence generated by the APG algorithm converges to a critical point under some mild conditions. Finally, numerical experiments on simulated data and multichannel images, as well as comparisons with some state-of-the-art algorithms, are presented to illustrate the effectiveness and robustness of the proposed algorithm for partial sparse and partial group sparse optimization problems.
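To make the two penalties concrete, the following minimal Python sketch evaluates the original penalty $\lambda_1\|{\bf x}\|_0+\lambda_2\|{\bf y}\|_{2,0}$ and one common form of its Capped-$\ell_1$ / group Capped-$\ell_1$ surrogate, $\min(|t|,\nu)/\nu$ applied entrywise to ${\bf x}$ and groupwise to ${\bf y}$; the function names, the group representation as a list of arrays, and the cap parameter `nu` are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def partial_sparsity_penalty(x, y_groups, lam1, lam2):
    """Original penalty: lam1 * ||x||_0 + lam2 * ||y||_{2,0}, i.e. the
    number of nonzero entries of x plus the number of nonzero groups of y,
    each weighted by its regularization parameter."""
    l0 = np.count_nonzero(x)
    l20 = sum(1 for g in y_groups if np.linalg.norm(g) > 0)
    return lam1 * l0 + lam2 * l20

def capped_l1_relaxation(x, y_groups, lam1, lam2, nu):
    """Continuous surrogate (assumed form): the 0/1 indicator is replaced
    by min(|t|, nu)/nu entrywise for x and by min(||g||_2, nu)/nu for
    each group g of y, with cap parameter nu > 0. The surrogate equals
    the original penalty whenever every nonzero entry/group norm
    exceeds nu."""
    part1 = lam1 * np.sum(np.minimum(np.abs(x), nu)) / nu
    part2 = lam2 * sum(min(np.linalg.norm(g), nu) for g in y_groups) / nu
    return part1 + part2
```

For example, with `x = [0, 2, 0]` and groups `[[0, 0], [3, 4]]`, both functions agree once `nu` is below the smallest nonzero magnitude, which is the lower-bound intuition behind the exactness of the relaxation.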
