Accessible Complexity Bounds for Restarted PDHG on Linear Programs with a Unique Optimizer
The restarted primal-dual hybrid gradient method (rPDHG) has recently emerged as an important tool for solving large-scale linear programs (LPs). For LPs with a unique optimum, we present an iteration bound of \(O\left(\kappa\Phi\cdot\ln\left(\frac{\kappa\Phi\|w^*\|}{\varepsilon}\right)\right)\), where \(\varepsilon\) is the target tolerance, \(\kappa\) is the standard condition number of the constraint matrix, \(\|w^*\|\) is the norm of the optimal solution, and \(\Phi\) is a geometric condition number of the LP sublevel sets. This iteration bound is “accessible” in the sense that computing it is typically no more difficult than computing the optimal solution itself. Indeed, we present a closed-form and tractably computable expression for \(\Phi\). This enables an analysis of the “two-stage performance” of rPDHG: we show that the first stage identifies the optimal basis in \({O}\left(\kappa\Phi\cdot\ln(\kappa\Phi)\right)\) iterations, and the second stage computes an \(\varepsilon\)-optimal solution in \(O\left(\|B^{-1}\|\|A\|\cdot\ln\left(\frac{\xi}{\varepsilon}\right)\right)\) additional iterations, where \(A\) is the constraint matrix, \(B\) is the optimal basis, and \(\xi\) is the smallest nonzero component of the optimal solution. Furthermore, computational tests mostly confirm the tightness of our iteration bounds. We also show a reciprocal relation between the iteration bound and the instance's stability under data perturbation, which in turn is equivalent to (i) its proximity to instances with multiple optima, and (ii) its LP sharpness. Finally, we analyze an “optimized” primal-dual reweighting that offers some intuition concerning the step-size heuristics used in practice.
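To make the algorithm under study concrete, the following is a minimal sketch of restarted PDHG for an equality-form LP, \(\min\, c^\top x\) s.t. \(Ax = b,\ x \ge 0\). The problem data, the restart schedule (restart from the average iterate on a fixed cadence), and the step-size choice \(\tau = \sigma = 0.9/\|A\|_2\) are illustrative assumptions for this sketch, not the particular scheme analyzed in the paper.

```python
import numpy as np

def rpdhg(c, A, b, outer=20, inner=500):
    """Sketch of restarted PDHG for min c^T x s.t. Ax = b, x >= 0."""
    m, n = A.shape
    # Equal step sizes satisfying tau * sigma * ||A||_2^2 < 1 (illustrative choice).
    tau = sigma = 0.9 / np.linalg.norm(A, 2)
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(outer):
        x_sum, y_sum = np.zeros(n), np.zeros(m)
        for _ in range(inner):
            # Primal gradient step on c - A^T y, projected onto the nonnegative orthant.
            x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)
            # Dual step with the standard PDHG extrapolation 2*x_new - x.
            y = y + sigma * (b - A @ (2 * x_new - x))
            x = x_new
            x_sum += x
            y_sum += y
        # Restart: begin the next inner loop from the average iterate.
        x, y = x_sum / inner, y_sum / inner
    return x, y

# Tiny LP with a unique optimizer: min x1 + 2*x2 s.t. x1 + x2 = 1, x >= 0,
# whose solution is x* = (1, 0).
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, y = rpdhg(c, A, b)
print(np.round(x, 4))  # ≈ [1, 0]
```

Restarting from the average iterate is what upgrades PDHG's sublinear behavior to the linear (log-in-\(1/\varepsilon\)) rates in the bounds above; on instances with a unique optimum, the iterates also identify the optimal basis (here, the support \(\{x_1\}\)) well before high accuracy is reached.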