Computational Experience and the Explanatory Value of Condition Numbers for Linear Optimization

The goal of this paper is to develop computational experience and to test the practical relevance of the theory of condition numbers C(d) for linear optimization, as applied to problem instances that one might encounter in practice. We used the NETLIB suite of linear optimization problems as a test bed for condition number computation and …
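
For readers new to this line of work, the condition number in question is Renegar's condition measure, recalled here in its standard form as background (not quoted from the abstract): $C(d)=\|d\|/\rho(d)$, where $d$ denotes the data defining the instance and $\rho(d)$ is the distance from $d$ to the set of ill-posed instances, i.e., instances for which arbitrarily small data perturbations can change the feasibility status of the problem; $C(d)=\infty$ precisely for ill-posed instances.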

Block-iterative algorithms with diagonally scaled oblique projections for the linear feasibility problem

We formulate a block-iterative algorithmic scheme for the solution of systems of linear inequalities and/or equations and analyze its convergence. This study provides, as special cases, proofs of convergence of (i) the recently proposed Component Averaging (CAV) method of Censor, Gordon and Gordon ({\it Parallel Computing}, 27:777–808, 2001), (ii) the recently proposed Block-Iterative CAV (BICAV) …
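
To make the flavor of such row-projection schemes concrete, here is a minimal sketch of a plain simultaneous-projection (Cimmino-type) iteration for a system $Ax\le b$; it is not the CAV/BICAV algorithm of the paper, and all names and tolerances are illustrative only.

    import numpy as np

    def simultaneous_projection(A, b, x0, relaxation=1.0, iters=100):
        """Cimmino-type iteration for the feasibility problem A x <= b.

        Each sweep projects the current iterate onto every violated half-space
        and averages the moves.  CAV replaces the uniform 1/m averaging below
        with sparsity-dependent weights (diagonally scaled oblique projections),
        and BICAV processes blocks of rows sequentially.
        """
        x = np.asarray(x0, dtype=float)
        m = A.shape[0]
        row_norms_sq = (A * A).sum(axis=1)
        for _ in range(iters):
            residual = A @ x - b            # positive entries mark violated rows
            violated = residual > 1e-12
            if not violated.any():
                break                       # x already satisfies A x <= b
            step = np.zeros_like(x)
            for i in np.nonzero(violated)[0]:
                step -= (residual[i] / row_norms_sq[i]) * A[i]
            x = x + relaxation * step / m   # averaged, relaxed projection step
        return x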

A Dynamic Large-Update Primal-Dual Interior-Point Method for Linear Optimization

Primal-dual interior-point methods (IPMs) have shown their power in solving large classes of optimization problems. However, at present there is still a gap between the practical behavior of these algorithms and their theoretical worst-case complexity results, with respect to the strategies of updating the duality gap parameter in the algorithm. The so-called small-update IPMs enjoy …
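
For reference, the distinction being drawn is the standard one (recalled here, not quoted from the paper): small-update methods shrink the barrier parameter by $\mu \leftarrow (1-\theta/\sqrt{n})\,\mu$ and attain the best known $O(\sqrt{n}\,\log(1/\epsilon))$ iteration bound, whereas large-update methods use $\mu \leftarrow (1-\theta)\,\mu$ with a fixed $\theta\in(0,1)$, perform much better in practice, and yet carry only the weaker $O(n\,\log(1/\epsilon))$ worst-case bound.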

Semidefinite programming vs LP relaxations for polynomial programming

We consider the global minimization of a multivariate polynomial on a semi-algebraic set \Omega defined with polynomial inequalities. We then compare two hierarchies of relaxations, namely, LP relaxations based on products of the original constraints, in the spirit of the RLT procedure of Sherali and Adams, and recent SDP (semidefinite programming) relaxations introduced by the …
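
A schematic comparison of the two hierarchies, in standard notation that is assumed here rather than taken from the paper: with $\Omega=\{x: g_j(x)\ge 0,\ j=1,\dots,m\}$, the level-$r$ LP relaxation linearizes the valid inequalities $\prod_j g_j(x)^{\alpha_j}\ge 0$ with $\sum_j\alpha_j\le r$ by introducing one variable $y_\alpha$ per monomial $x^\alpha$, whereas the level-$r$ SDP relaxation works with the same lifted variables $y$ but instead imposes positive semidefiniteness of the moment matrix $M_r(y)$ and of the localizing matrices $M_{r-\lceil \deg g_j/2\rceil}(g_j\,y)$.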

Continuous trajectories for primal-dual potential-reduction methods

This article considers continuous trajectories of the vector fields induced by two different primal-dual potential-reduction algorithms for solving linear programming problems. For both algorithms, it is shown that the associated continuous trajectories include the central path and the duality gap converges to zero along all these trajectories. For the algorithm of Kojima, Mizuno, and Yoshise, …
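
As background (standard material, not quoted from the paper): primal-dual potential-reduction methods such as that of Kojima, Mizuno, and Yoshise drive down the Tanabe–Todd–Ye potential $\Phi_\rho(x,s)=\rho\ln(x^{T}s)-\sum_{i=1}^{n}\ln(x_i s_i)$ with $\rho>n$, and the continuous trajectories studied here arise by treating the corresponding search directions as a vector field and following its integral curves.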

Polynomial interior point cutting plane methods

Polynomial cutting plane methods based on the logarithmic barrier function and on the volumetric center are surveyed. These algorithms construct a linear programming relaxation of the feasible region, find an appropriate approximate center of the region, and call a separation oracle at this approximate center to determine whether additional constraints should be added to the …
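
As a structural sketch of the loop just described, the following uses the Chebyshev center as a crude stand-in for the approximate analytic or volumetric centers of the surveyed methods; none of the names or routines below are from the survey itself.

    import numpy as np
    from scipy.optimize import linprog

    def chebyshev_center(A, b):
        """Center of the largest ball inscribed in {x : A x <= b}; used here only
        as a simple stand-in for an approximate analytic/volumetric center."""
        norms = np.linalg.norm(A, axis=1)
        c = np.zeros(A.shape[1] + 1)
        c[-1] = -1.0                                   # maximize the radius r
        A_ub = np.hstack([A, norms[:, None]])          # a_i^T x + ||a_i|| r <= b_i
        bounds = [(None, None)] * A.shape[1] + [(0, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b, bounds=bounds)
        return res.x[:-1]

    def cutting_plane(oracle, A, b, max_rounds=50):
        """Generic loop: center the relaxation, query the separation oracle at
        the center, add the returned cut, and repeat until the oracle accepts."""
        for _ in range(max_rounds):
            x = chebyshev_center(A, b)
            cut = oracle(x)
            if cut is None:
                return x                               # oracle accepts the center
            a, beta = cut                              # valid cut a^T y <= beta, violated at x
            A = np.vstack([A, a])
            b = np.append(b, beta)
        return x

    # Tiny usage example: start from the box [-1,1]^2 and impose x1 + x2 >= 0.5.
    A0 = np.vstack([np.eye(2), -np.eye(2)])
    b0 = np.ones(4)
    oracle = lambda x: (np.array([-1.0, -1.0]), -0.5) if x.sum() < 0.5 else None
    print(cutting_plane(oracle, A0, b0))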

A Linear Programming Approach to Semidefinite Programming Problems

Until recently, the study of interior point methods has dominated algorithmic research in semidefinite programming (SDP). From a theoretical point of view, these interior point methods offer everything one can hope for: they apply to all SDPs, exploit second-order information, and offer polynomial-time complexity. Still, for practical applications with many constraints $k$, the …
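
The basic observation behind such LP approaches, recalled here in standard form rather than quoted from the paper, is that $X\succeq 0$ if and only if $v^{T}Xv\ge 0$ for all $v\in\Re^{n}$; the semidefinite constraint can therefore be outer-approximated by the linear inequalities $v^{T}Xv\ge 0$ for finitely many vectors $v$, with additional vectors added as cutting planes whenever the current LP solution fails to be positive semidefinite.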

An Interior-Point Approach to Sensitivity Analysis in Degenerate Linear Programs

We consider the interior-point approach to sensitivity analysis in linear programming (LP) developed by the authors. We investigate the quality of the interior-point bounds under degeneracy. In the case of a special degeneracy, we show that these bounds have the same nice relationship with the optimal partition bounds as in the nondegenerate case. We prove …

Improved linear programming bounds for antipodal spherical codes

Let $S\subset[-1,1)$. A finite set $C=\{x_i\}_{i=1}^M\subset\Re^n$ is called a {\em spherical $S$-code} if $||x_i||=1$ for each $i$, and $x_i^T x_j\in S$ for $i\ne j$. For $S=[-1,.5]$, maximizing $M=|C|$ is commonly referred to as the {\em kissing number} problem. A well-known technique based on harmonic analysis and linear programming can be used to bound $M$. We consider …
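
The bounding technique alluded to is the Delsarte-style linear programming bound; its standard statement (background, not quoted from the paper) is that if $f(t)=\sum_{k\ge 0} f_k G_k^n(t)$ is a polynomial whose Gegenbauer coefficients satisfy $f_0>0$ and $f_k\ge 0$ for $k\ge 1$, and $f(t)\le 0$ for all $t\in S$, then every spherical $S$-code satisfies $M\le f(1)/f_0$; the linear program optimizes this bound over such polynomials of bounded degree.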

Hyper-sparsity in the revised simplex method and how to exploit it

The revised simplex method is often the method of choice when solving large-scale sparse linear programming problems, particularly when a family of closely related problems is to be solved. Each iteration of the revised simplex method requires the solution of two linear systems and a matrix-vector product. For a significant number of practical problems …
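
For orientation, the two linear systems are, in textbook notation, the BTRAN system $B^{T}y=c_B$ and the FTRAN system $Bd=a_q$, and the matrix-vector product is the pricing step; the following dense sketch of one iteration is illustrative only (it is not the paper's implementation, and a hyper-sparse code would work with sparse factored forms throughout).

    import numpy as np

    def revised_simplex_iteration(A, b, c, basis):
        """One dense, textbook iteration of the revised simplex method for
        min c^T x  s.t.  A x = b, x >= 0, given a primal feasible basis.
        Returns an updated basis, or None if the current basis is optimal.
        (A real solver keeps a factorization of B and updates x_B rather
        than re-solving for it every iteration.)"""
        nonbasis = [j for j in range(A.shape[1]) if j not in basis]
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)                    # current basic solution
        y = np.linalg.solve(B.T, c[basis])             # BTRAN: B^T y = c_B
        reduced = c[nonbasis] - A[:, nonbasis].T @ y   # PRICE: matrix-vector product
        if (reduced >= -1e-9).all():
            return None                                # no improving column: optimal
        q = nonbasis[int(np.argmin(reduced))]          # entering column (Dantzig rule)
        d = np.linalg.solve(B, A[:, q])                # FTRAN: B d = a_q
        if not (d > 1e-9).any():
            raise ValueError("unbounded along the entering direction")
        ratios = np.where(d > 1e-9, x_B / np.where(d > 1e-9, d, 1.0), np.inf)
        r = int(np.argmin(ratios))                     # ratio test picks the leaving row
        new_basis = list(basis)
        new_basis[r] = q
        return new_basis

Hyper-sparsity, in the sense used here, refers to the FTRAN/BTRAN solutions and the pricing result themselves being largely zero, which dense linear algebra of the kind above cannot exploit.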