An easily computable upper bound on the Hoffman constant for homogeneous inequality systems

Let $A\in \mathbb{R}^{m\times n}\setminus \{0\}$ and $P:=\{x:Ax\le 0\}$. This paper provides a procedure to compute an upper bound on the following {\em homogeneous Hoffman constant} \[ H_0(A) := \sup_{u\in \mathbb{R}^n \setminus P} \frac{\text{dist}(u,P)}{\text{dist}(Au, \mathbb{R}^m_-)}. \] In sharp contrast to the intractability of computing more general Hoffman constants, the procedure described in this paper is entirely … Read more
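
The abstract does not describe the procedure itself, so the following is only a crude numerical illustration of the quantity being bounded: a Python sketch (assuming numpy and scipy; the helpers `dist_to_P` and `hoffman_ratio` are names made up here) that evaluates the ratio $\text{dist}(u,P)/\text{dist}(Au,\mathbb{R}^m_-)$ at sampled points $u$. Sampling only gives a rough lower estimate of $H_0(A)$, not the upper bound of the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Illustration of the definition of H_0(A) only; NOT the paper's upper-bound procedure.

def dist_to_P(A, u):
    """Euclidean distance from u to P = {x : Ax <= 0}, computed by projecting u onto P."""
    cons = {"type": "ineq", "fun": lambda x: -(A @ x)}   # A x <= 0  <=>  -A x >= 0
    res = minimize(lambda x: np.sum((x - u) ** 2),
                   x0=np.zeros(A.shape[1]), constraints=[cons])
    return np.linalg.norm(res.x - u)

def dist_to_nonpositive_orthant(v):
    """Distance from v to R^m_-: project by clipping positive entries to zero."""
    return np.linalg.norm(np.maximum(v, 0.0))

def hoffman_ratio(A, u):
    denom = dist_to_nonpositive_orthant(A @ u)
    return dist_to_P(A, u) / denom if denom > 0 else 0.0   # points already in P contribute nothing

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
# Maximizing over random samples yields a crude lower estimate of H_0(A).
print(max(hoffman_ratio(A, rng.standard_normal(3)) for _ in range(200)))
```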

Bound Propagation for Linear Inequalities Revisited

In 2011, Korovin and Voronkov (Proceedings of the 23rd International Conference on Automated Deduction, vol. 6803 of Lecture Notes in Computer Science, pp. 369-383) proposed a method based on bound propagation for solving systems of linear inequalities. In this paper, an alternate description of their algorithm which also incorporates an addition that returns a certificate … Read more
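
The abstract gives no algorithmic details, so the sketch below shows only generic bound propagation for a system $Ax\le b$ over a variable box (not necessarily the Korovin–Voronkov procedure): each inequality is solved for one variable at a time to tighten that variable's interval, and an empty interval certifies infeasibility of the boxed system. The name `propagate_bounds` is invented for this illustration.

```python
import numpy as np

def propagate_bounds(A, b, lo, hi, passes=10):
    """Generic bound propagation for A x <= b over the box [lo, hi].

    For each row a^T x <= b_i and each variable j, isolate x_j:
        a_ij * x_j <= b_i - (minimum over the box of sum_{k != j} a_ik * x_k),
    which tightens hi[j] when a_ij > 0 and lo[j] when a_ij < 0.
    Returns the tightened (lo, hi); an empty box (lo > hi somewhere) certifies
    infeasibility of the boxed system.  Illustrative only, not the paper's algorithm.
    """
    lo, hi = lo.astype(float).copy(), hi.astype(float).copy()
    m, n = A.shape
    for _ in range(passes):
        for i in range(m):
            a = A[i]
            # Minimum of a_k * x_k over [lo_k, hi_k], per coordinate.
            mins = np.where(a >= 0, a * lo, a * hi)
            for j in range(n):
                if a[j] == 0:
                    continue
                residual = b[i] - (mins.sum() - mins[j])
                if a[j] > 0:
                    hi[j] = min(hi[j], residual / a[j])
                else:
                    lo[j] = max(lo[j], residual / a[j])
    return lo, hi

A = np.array([[1.0, 1.0], [-1.0, 0.0]])
b = np.array([1.0, -0.25])                  # x1 + x2 <= 1  and  x1 >= 0.25
lo, hi = propagate_bounds(A, b, np.zeros(2), np.ones(2))
print(lo, hi)                               # lo -> [0.25, 0.0], hi -> [1.0, 0.75]
```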

An Oblivious Ellipsoid Algorithm for Solving a System of (In)Feasible Linear Inequalities

The ellipsoid algorithm is a fundamental algorithm for computing a solution to a system (P) of m linear inequalities in n variables when its set of solutions has positive volume. However, when (P) is infeasible, the ellipsoid algorithm has no mechanism for proving its infeasibility. This is in contrast to the other two … Read more
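
For readers who want to see the mechanism being discussed, here is a minimal sketch of the classical central-cut ellipsoid iteration for $Ax\le b$ (not the oblivious variant proposed in the paper): a violated inequality supplies the cut, and the ellipsoid with center $c$ and shape matrix $Q$ is shrunk around it.

```python
import numpy as np

def ellipsoid_feasibility(A, b, R=10.0, max_iter=1000, tol=1e-9):
    """Classical central-cut ellipsoid method for finding x with A x <= b (n >= 2).

    Start from the ball of radius R around the origin; at each step a violated
    row a^T x <= beta acts as a central cut through the center c of the current
    ellipsoid {x : (x - c)^T Q^{-1} (x - c) <= 1}.  Sketch of the textbook
    method only, not the oblivious variant of the paper.
    """
    m, n = A.shape
    c = np.zeros(n)
    Q = (R ** 2) * np.eye(n)
    for _ in range(max_iter):
        violated = np.where(A @ c > b + tol)[0]
        if violated.size == 0:
            return c                          # feasible point found
        a = A[violated[0]]
        Qa = Q @ a
        g = Qa / np.sqrt(a @ Qa)              # cut direction scaled by the ellipsoid
        c = c - g / (n + 1)
        Q = (n ** 2 / (n ** 2 - 1.0)) * (Q - (2.0 / (n + 1)) * np.outer(g, g))
    return None                               # no conclusion within the iteration budget

A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([3.0, -1.0, -1.0])               # x1 + x2 <= 3,  x1 >= 1,  x2 >= 1
print(ellipsoid_feasibility(A, b))
```

Note that, as the abstract points out, when no feasible point exists this loop simply runs out of budget without producing any certificate of infeasibility.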

Can linear superiorization be useful for linear optimization problems?

Linear superiorization considers linear programming problems but, instead of attempting to solve them with linear optimization methods, it employs perturbation-resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are: (i) Does linear superiorization provide a feasible point whose linear … Read more
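
As a toy illustration of the superiorization idea only (the specific perturbation-resilient algorithms benchmarked in the paper are not reproduced here), the sketch below interleaves cyclic half-space projections for $Ax\le b$ with shrinking perturbation steps along $-c$ that try to reduce the target value $c^\top x$.

```python
import numpy as np

def project_halfspace(x, a, beta):
    """Orthogonal projection of x onto the half-space {y : a^T y <= beta}."""
    viol = a @ x - beta
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def superiorized_feasibility(A, b, c, x0, iters=500, alpha=0.99):
    """Toy linear superiorization: feasibility-seeking sweeps for A x <= b,
    interleaved with geometrically shrinking perturbations along -c that
    steer the iterates toward smaller c^T x.  Illustrative sketch only.
    """
    x = x0.astype(float).copy()
    step = 1.0
    for _ in range(iters):
        # Perturbation phase: a small step that decreases the target c^T x.
        step *= alpha
        x = x - step * c / np.linalg.norm(c)
        # Feasibility-seeking phase: one cyclic sweep of half-space projections.
        for a_i, b_i in zip(A, b):
            x = project_halfspace(x, a_i, b_i)
    return x

# Reduce c^T x over x1 + x2 <= 2, x1 >= 0, x2 >= 0 (the true minimum is at the origin).
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([2.0, 0.0, 0.0])
c = np.array([1.0, 1.0])
print(superiorized_feasibility(A, b, c, x0=np.array([1.5, 1.5])))
```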

The Farkas Lemma Revisited

The Farkas Lemma is extended to simultaneous linear operator and polyhedral sublinear operator inequalities by Boolean valued analysis. Citation: Sobolev Institute of Mathematics, Novosibirsk, 630090 Russia
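
For context only (this statement is not part of the abstract above), the classical finite-dimensional form of the lemma being extended is the alternative: for $A\in\mathbb{R}^{m\times n}$ and $b\in\mathbb{R}^m$, exactly one of the two systems \[ Ax=b,\; x\ge 0 \qquad\text{and}\qquad A^{\mathsf T}y\le 0,\; b^{\mathsf T}y>0 \] has a solution. The paper generalizes an alternative of this kind to simultaneous linear operator and polyhedral sublinear operator inequalities.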

Boundedness Theorems for the Relaxation Method

A classical theorem by Block and Levin says that certain variants of the relaxation method for solving systems of linear inequalities produce bounded sequences of intermediate solutions even when running on inconsistent input data. Using a new approach, we prove a more general version of this result and answer an old open problem of quantifying … Read more
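
As a reminder of the algorithm in question (a generic sketch, not the paper's new approach or the exact setting of the Block–Levin theorem), one standard relaxation step for $Ax\le b$ projects the current iterate onto a most-violated half-space, scaled by an over-relaxation parameter $\lambda\in(0,2]$; on inconsistent data the loop never reaches a feasible point, which is exactly the regime where boundedness of the iterates is at stake.

```python
import numpy as np

def relaxation_method(A, b, x0, lam=1.5, max_iter=10_000, tol=1e-9):
    """Relaxation method (Agmon / Motzkin-Schoenberg style) for A x <= b.

    Each step picks a most-violated inequality a^T x <= beta and moves the
    iterate by lam times the orthogonal-projection step onto that half-space,
    with lam in (0, 2].  On inconsistent data the iterates never satisfy the
    stopping test, which is the regime the boundedness results address.
    Generic sketch, not the paper's construction.
    """
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        viol = A @ x - b
        i = int(np.argmax(viol))
        if viol[i] <= tol:
            return x, True                    # feasible up to tol
        a = A[i]
        x = x - lam * (viol[i] / (a @ a)) * a
    return x, False                           # budget exhausted (e.g. inconsistent data)

# Consistent example: x1 >= 1, x2 >= 1, x1 + x2 <= 3.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([-1.0, -1.0, 3.0])
print(relaxation_method(A, b, x0=np.zeros(2)))
```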