A randomized Mirror-Prox method for solving structured large-scale matrix saddle-point problems

In this paper, we derive a randomized version of the Mirror-Prox method for solving certain structured matrix saddle-point problems, such as the maximal eigenvalue minimization problem. Deterministic first-order schemes, such as Nesterov's Smoothing Techniques or standard Mirror-Prox methods, require the exact computation of a matrix exponential at every iteration, which limits the size of the problems they can solve. Our method instead allows the use of stochastic approximations of matrix exponentials. We prove that our randomized scheme significantly decreases the complexity of its deterministic counterpart for large-scale matrix saddle-point problems. Numerical experiments illustrate and confirm our theoretical results.
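To make the key idea concrete, the sketch below (not the estimator from the paper, and with illustrative function names and parameters) shows one standard way to approximate a symmetric matrix exponential stochastically: the normalized exponential exp(A)/tr(exp(A)) is estimated from Gaussian probe vectors using only matrix-vector products with A, avoiding a full matrix exponential computation.

```python
import numpy as np

def expm_action_taylor(A, v, num_terms=30):
    """Approximate exp(A) @ v with a truncated Taylor series,
    using only matrix-vector products with A (no full expm)."""
    result = v.copy()
    term = v.copy()
    for k in range(1, num_terms):
        term = A @ term / k
        result += term
    return result

def randomized_gibbs_estimate(A, num_probes=20, num_terms=30, seed=None):
    """Monte-Carlo estimate of exp(A) / tr(exp(A)) for symmetric A.
    For a Gaussian probe g, the vector z = exp(A/2) g satisfies
    E[z z^T] = exp(A) and E[||z||^2] = tr(exp(A)), so averaging over
    a few probes gives a low-rank stochastic stand-in for the exact
    matrix exponential used by a deterministic first-order step."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    probes = rng.standard_normal((n, num_probes))
    Z = np.column_stack(
        [expm_action_taylor(A / 2.0, probes[:, i], num_terms)
         for i in range(num_probes)]
    )
    trace_est = (Z * Z).sum() / num_probes       # Hutchinson-type trace estimate
    X_est = (Z @ Z.T) / (num_probes * trace_est)  # approx exp(A)/tr(exp(A))
    return X_est, trace_est
```

The per-probe cost is a handful of matrix-vector products, so the estimate scales with the cost of applying A rather than with an exact eigendecomposition or dense matrix exponential.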

Citation

Technical report, ETH Zurich / Georgia Institute of Technology, December 2011
