An inexact accelerated proximal gradient method for large scale linearly constrained convex SDP

The accelerated proximal gradient (APG) method, first proposed by Nesterov, later refined by Beck and Teboulle, and studied in a unifying manner by Tseng, has proven to be highly efficient in solving some classes of large scale structured (possibly nonsmooth) convex optimization problems, including nuclear norm minimization problems in matrix completion and $l_1$ minimization problems in compressed sensing. The method has superior worst-case iteration complexity over the classical projected gradient method, and usually performs well in practice on problems with appropriate structure. In this paper, we extend the APG method to the inexact setting, where the subproblem in each iteration is only solved approximately, and show that it enjoys the same worst-case iteration complexity as its exact counterpart if the subproblems are progressively solved to sufficient accuracy. We apply our inexact APG method to solve large scale convex quadratic semidefinite programming (QSDP) problems of the form $\min\{ \frac{1}{2}\langle x, \mathcal{Q}(x)\rangle + \langle c, x\rangle \mid \mathcal{A}(x) = b,\; x \succeq 0\}$, where $\mathcal{Q},\mathcal{A}$ are given linear maps and $b,c$ are given data. The subproblem in each iteration is solved by a semismooth Newton-CG (SSNCG) method, warm-started with the iterate from the previous iteration. Our APG-SSNCG method is demonstrated to be efficient for QSDP problems whose positive semidefinite linear maps $\mathcal{Q}$ are highly ill-conditioned or rank deficient.
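For orientation, the exact APG scheme in the Beck-Teboulle (FISTA) form for minimizing $F = f + g$, with $f$ convex and smooth with $L$-Lipschitz gradient and $g$ convex but possibly nonsmooth, is sketched below; this is the standard iteration the inexact method builds on, written out here for context rather than quoted from the paper (the symbols $f$, $g$, $L$, $y_k$, $t_k$ are generic, not the paper's notation):

$$
\begin{aligned}
x_k &= \operatorname*{argmin}_x \Big\{ g(x) + \langle \nabla f(y_k),\, x - y_k\rangle + \tfrac{L}{2}\|x - y_k\|^2 \Big\},\\
t_{k+1} &= \tfrac{1}{2}\Big(1 + \sqrt{1 + 4t_k^2}\Big),\\
y_{k+1} &= x_k + \tfrac{t_k - 1}{t_{k+1}}\,(x_k - x_{k-1}),
\end{aligned}
$$

which attains $F(x_k) - \min F = O(L/k^2)$, compared with $O(1/k)$ for the classical (projected or proximal) gradient method. In the inexact setting of the paper, the argmin subproblem is solved only approximately, with accuracy tightened as $k$ grows; for QSDP this subproblem is handled by the warm-started SSNCG inner solver.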

Citation

SIAM Journal on Optimization, to appear.
