Computational Guarantees for Restarted PDHG for LP based on “Limiting Error Ratios” and LP Sharpness

In recent years, there has been growing interest in solving linear optimization problems – or
more simply “LP” – using first-order methods, which avoid the costly matrix factorizations
that traditional methods require on huge-scale LP instances. The restarted primal-dual hybrid gradient
method (PDHG) – together with some heuristic techniques – has emerged as a powerful tool
for solving huge-scale LPs. However, the theoretical understanding of the restarted PDHG and
the validation of various heuristic implementation techniques are still very limited. Existing
complexity analyses have relied on the Hoffman constant of the LP KKT system, which is known
to be overly conservative, is difficult to compute (and hence difficult to validate empirically), and
offers little insight into instance-specific characteristics of LP problems. These limitations
have made it difficult to discern which characteristics of an LP instance make it
computationally easy or hard to solve. With the goal of
overcoming these limitations, in this paper we introduce and develop two purely geometry-based
condition measures for LP instances: the “limiting error ratio” and the LP sharpness. We provide
new computational guarantees for the restarted PDHG based on these two condition measures.
For the limiting error ratio, we provide a computable upper bound and show its relationship with
the data instance’s proximity to infeasibility under perturbation. For the LP sharpness, we prove
its equivalence to the stability of the LP optimal solution set under perturbation of the objective
function. We validate our computational guarantees in terms of these condition measures via
specially constructed instances. Conversely, our computational guarantees validate the
efficacy of certain heuristic techniques (row preconditioning and step-size tuning) that improve
computational performance in practice. Finally, we present computational experiments on LP
relaxations from the MIPLIB dataset that demonstrate the promise of various implementation
strategies.
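
For concreteness, the Python sketch below outlines one plain variant of restarted PDHG for an LP of the form min c^T x subject to Ax = b, x >= 0, in which each epoch restarts from the running average of its iterates. The fixed restart frequency, the spectral-norm step-size rule, and all names in the code are illustrative assumptions for this sketch; it does not reproduce the adaptive restart criterion, row preconditioning, or step-size heuristics analyzed in the paper.

    # Minimal sketch (assumed setup): restarted PDHG for  min c^T x  s.t.  A x = b, x >= 0.
    import numpy as np

    def restarted_pdhg(A, b, c, n_restarts=20, inner_iters=200):
        m, n = A.shape
        # Step sizes chosen so that tau * sigma * ||A||_2^2 <= 1 (spectral-norm rule).
        op_norm = np.linalg.norm(A, 2)
        tau = sigma = 1.0 / max(op_norm, 1e-12)

        x = np.zeros(n)   # primal iterate, kept in the nonnegative orthant
        y = np.zeros(m)   # dual iterate (free, since the constraints are equalities)

        for _ in range(n_restarts):
            x_avg, y_avg = np.zeros(n), np.zeros(m)
            for k in range(1, inner_iters + 1):
                # Primal step: projected gradient step onto x >= 0.
                x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)
                # Dual step uses the extrapolated point 2*x_new - x.
                y = y + sigma * (b - A @ (2.0 * x_new - x))
                x = x_new
                # Running averages of the iterates within this restart epoch.
                x_avg += (x - x_avg) / k
                y_avg += (y - y_avg) / k
            # Restart: the next epoch begins from the averaged iterates.
            x, y = x_avg, y_avg
        return x, y

    # Tiny illustrative LP (assumed data): min x1 + 2*x2 + 3*x3  s.t.  x1 + x2 + x3 = 1, x >= 0.
    A = np.array([[1.0, 1.0, 1.0]])
    b = np.array([1.0])
    c = np.array([1.0, 2.0, 3.0])
    x_hat, y_hat = restarted_pdhg(A, b, c)   # x_hat should approach (1, 0, 0)

In practice, restart schemes are typically adaptive (for example, restarting once a normalized duality gap has decreased sufficiently), and row preconditioning rescales the rows of A before the method is applied; the fixed-frequency rule above is used only to keep the sketch short.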

Citation

MIT Operations Research Center Working Paper
