A Fully Sparse Implementation of a Primal-Dual Interior-Point Potential Reduction Method for Semidefinite Programming

In this paper, we show how to exploit sparsity in the problem data in a primal-dual potential reduction method for solving a class of semidefinite programs. When the problem data is sparse, the dual variable is also sparse, but the primal variable is not. To avoid working with the dense primal variable, we apply the positive definite matrix completion theory of Fukuda et al. and work with partial matrices instead. The other place in the algorithm where sparsity should be exploited is the computation of the search directions, which requires the gradient and the Hessian-matrix products of the primal and dual barrier functions at every iteration. Using an idea from automatic differentiation in backward mode, both the gradient and the Hessian-matrix product can be computed in time proportional to that needed to evaluate the barrier functions of the sparse variables themselves. Moreover, the high space complexity normally associated with backward-mode automatic differentiation can be avoided in this case. In addition, we suggest a technique for efficiently computing the determinant of the positive definite matrix completion, which is required to compute the primal search directions. We also propose a method of obtaining one of the primal search directions that minimizes the number of evaluations of the determinant of the positive definite completion. Finally, we implement the algorithm and test it on the problem of finding the maximum cut of a graph.
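
The backward-mode idea can be illustrated on the dense log-determinant barrier: the gradient of -log det(X) is -X^{-1}, and the Hessian applied to a direction D is X^{-1} D X^{-1}, and both can be obtained from the code that evaluates the barrier by automatic differentiation at a cost comparable to the evaluation itself. The sketch below is a minimal dense JAX illustration of this principle only; it is not the paper's implementation, which works with sparse Cholesky factorizations and partial matrices, and all function and variable names are illustrative.

```python
import jax
import jax.numpy as jnp

def neg_log_det(X):
    """Log-determinant barrier -log det(X) for a dense positive definite X."""
    _, logdet = jnp.linalg.slogdet(X)
    return -logdet

# Reverse-mode AD yields the gradient, -X^{-1} for symmetric X,
# at a cost comparable to evaluating the barrier itself.
barrier_grad = jax.grad(neg_log_det)

def barrier_hess_matvec(X, D):
    """Hessian of the barrier applied to a direction D, i.e. X^{-1} D X^{-1},
    obtained by forward-mode differentiation of the reverse-mode gradient."""
    return jax.jvp(barrier_grad, (X,), (D,))[1]

if __name__ == "__main__":
    key = jax.random.PRNGKey(0)
    A = jax.random.normal(key, (5, 5))
    X = A @ A.T + 5.0 * jnp.eye(5)   # a random positive definite matrix
    D = jnp.eye(5)                   # an arbitrary symmetric direction

    Xinv = jnp.linalg.inv(X)
    print(jnp.allclose(barrier_grad(X), -Xinv, atol=1e-4))
    print(jnp.allclose(barrier_hess_matvec(X, D), Xinv @ D @ Xinv, atol=1e-4))
```

In the paper's setting the same reverse-mode principle is applied to the barrier evaluated through a sparse factorization, so that the gradient and Hessian-matrix products inherit the sparse cost of the factorization rather than the dense cost shown here.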

Citation

Cornell University, Ithaca, NY 14853, December 2004. Also available at http://arxiv.org/ps/cs.NA/0412009
