Given a family of linear constraints and a linear objective function, one can consider whether to apply a Linear Programming (LP) algorithm or a Linear Superiorization (LinSup) algorithm to these data. In the LP methodology one aims at finding a point that satisfies the constraints and attains the minimal value of the objective function over the feasible set. The Linear Superiorization approach considers the same data as linear programming problems, but instead of attempting to solve them with linear optimization methods, it employs perturbation-resilient feasibility-seeking algorithms and steers them toward feasible points with reduced (not necessarily minimal) objective function values. Previous studies compared LP and LinSup in terms of their respective outputs and the resources they use. In this paper we investigate these two approaches in terms of their sensitivity to the condition numbers of the system of linear constraints. Condition numbers are a measure of the impact of deviations in the input data on the output of a problem; in particular, they describe the factor of error propagation when the given data are inexact or erroneous. Therefore, the ability of LP and LinSup to cope with increased condition numbers, and thus with ill-posed problems, is an important matter that has not been studied until now. We investigate experimentally the advantages and disadvantages of both LP and LinSup on illustrative linear programming problems with varying condition numbers and different problem dimensions.
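As a minimal illustration of the role played by the condition number as a factor of error propagation (this sketch is not the experimental setup of the paper; the matrix construction, dimensions, and perturbation size are chosen here purely for demonstration), one can build a square linear system with a prescribed condition number, perturb its right-hand side, and compare the relative output error with the bound given by the condition number:

```python
import numpy as np

# Illustrative sketch: for A x = b, the condition number kappa(A) bounds the
# amplification of relative data errors in the solution:
#     ||dx|| / ||x||  <=  kappa(A) * ||db|| / ||b||.

rng = np.random.default_rng(0)

# Construct a matrix with controlled conditioning via its singular values.
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
singular_values = np.logspace(0, -6, n)          # kappa(A) ~ 1e6
A = U @ np.diag(singular_values) @ V.T

x_true = rng.standard_normal(n)
b = A @ x_true

# Perturb the right-hand side and observe the error amplification.
db = 1e-10 * rng.standard_normal(n)
x_pert = np.linalg.solve(A, b + db)

kappa = np.linalg.cond(A)
rel_input_error = np.linalg.norm(db) / np.linalg.norm(b)
rel_output_error = np.linalg.norm(x_pert - x_true) / np.linalg.norm(x_true)

print(f"condition number kappa(A): {kappa:.2e}")
print(f"relative input error:      {rel_input_error:.2e}")
print(f"relative output error:     {rel_output_error:.2e}")
print(f"bound kappa * input error: {kappa * rel_input_error:.2e}")
```

The observed output error stays below the bound kappa(A) times the input error, but for ill-conditioned systems it can come close to it, which is the sensitivity phenomenon whose effect on LP and LinSup is examined in this paper.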