Efficient First-Order Methods for Linear Programming and Semidefinite Programming

We present a simple transformation of any linear program or semidefinite program into an equivalent convex optimization problem whose only constraints are linear equations. The objective function is defined on the whole space, making virtually all subgradient methods immediately applicable. We observe, moreover, that the objective function is naturally “smoothed,” thereby allowing most first-order …
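For orientation, the following is a minimal sketch of a projected subgradient iteration for a problem of the general form min f(x) subject to Ax = b, i.e. a convex objective defined on the whole space with only linear equality constraints. The objective f, the data A and b, and the diminishing step-size rule are illustrative assumptions; the paper’s actual transformation and smoothing are not reproduced here.

```python
import numpy as np

# Minimal sketch (illustrative assumptions, not the paper's transformation):
# projected subgradient method for
#     minimize f(x)  subject to  A x = b,
# where f is convex and defined on the whole space.

rng = np.random.default_rng(0)
m, n = 5, 20
A = rng.standard_normal((m, n))
x_feas = rng.standard_normal(n)
b = A @ x_feas                      # b chosen so the system is feasible

def f(x):
    return np.abs(x).sum()          # toy nonsmooth convex objective

def subgrad(x):
    return np.sign(x)               # a subgradient of the l1 norm

# Orthogonal projector onto the affine set {x : A x = b}
AAt_inv = np.linalg.inv(A @ A.T)
def project(x):
    return x - A.T @ (AAt_inv @ (A @ x - b))

x = project(np.zeros(n))            # feasible starting point
best = f(x)
for k in range(1, 2001):
    g = subgrad(x)
    x = project(x - (1.0 / np.sqrt(k)) * g)   # diminishing step size
    best = min(best, f(x))

print("best objective value found:", best)
```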

A primal-simplex based Tardos’ algorithm

In the mid-eighties, Tardos proposed a strongly polynomial algorithm for solving linear programming problems for which the size of the coefficient matrix is polynomially bounded by the dimension. Combining Orlin’s primal-based modification and Mizuno’s use of the simplex method, we introduce a modification of Tardos’ algorithm that considers only the primal problem and uses the simplex method …

On Augmentation Algorithms for Linear and Integer-Linear Programming: From Edmonds-Karp to Bland and Beyond

Motivated by Bland’s linear-programming generalization of the renowned Edmonds-Karp efficient refinement of the Ford-Fulkerson maximum-flow algorithm, we discuss three closely related natural augmentation rules for linear and integer-linear optimization. In several nice situations, we show that polynomially many augmentation steps suffice to reach an optimum. In particular, when using “discrete steepest-descent augmentations” (i.e., directions with the best …
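As background, here is a minimal sketch of a generic augmentation loop for an integer program: repeatedly pick an improving direction from a candidate set and take the longest integral step that preserves feasibility. The toy instance, the candidate direction set, and the “first improving direction” rule are illustrative assumptions, not the paper’s augmentation rules.

```python
import numpy as np

# Minimal sketch of a generic augmentation scheme for
#     maximize c^T x  subject to  A x <= b,  x >= 0 integer:
# repeatedly pick an improving candidate direction and take the longest
# integral step that keeps the point feasible.  Instance, direction set,
# and selection rule are illustrative assumptions.

A = np.array([[2, 1],
              [1, 3]])
b = np.array([10, 12])
c = np.array([3, 2])

# small illustrative set of candidate augmentation directions
directions = [np.array(d) for d in ((1, 0), (0, 1), (-1, 2))]

def feasible(x):
    return np.all(x >= 0) and np.all(A @ x <= b)

def max_step(x, z):
    """Largest integer t >= 0 such that x + t*z is still feasible."""
    t = 0
    while feasible(x + (t + 1) * z):
        t += 1
    return t

x = np.zeros(2, dtype=int)          # feasible starting point
while True:
    improving = [z for z in directions
                 if c @ z > 0 and max_step(x, z) >= 1]
    if not improving:
        break                       # no augmenting step available
    z = improving[0]                # "first improving direction" rule
    x = x + max_step(x, z) * z

print("final point:", x, "objective:", c @ x)
```

In general, the loop only terminates at an optimum when the candidate set is a genuine test set for the problem (e.g., a Graver basis in the integer-linear case); with too few directions it can stall at a suboptimal point, which is why the choice of augmentation rule matters.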

A tight iteration-complexity upper bound for the MTY predictor-corrector algorithm via redundant Klee-Minty cubes

It is an open question whether there is an interior-point algorithm for linear optimization problems with a lower iteration-complexity than the classical bound $\mathcal{O}(\sqrt{n} \log(\frac{\mu_1}{\mu_0}))$. This paper provides a negative answer to that question for a variant of the Mizuno-Todd-Ye predictor-corrector algorithm. In fact, we prove that for any $\epsilon >0$, there is a redundant …

A Strongly Polynomial Simplex Method for Totally Unimodular LP

Kitahara and Mizuno obtained new bounds for the number of distinct solutions generated by the simplex method for linear programming (LP). In this paper, we combine the results of Kitahara and Mizuno with Tardos’s strongly polynomial algorithm, and propose an algorithm for solving a standard form LP problem. The algorithm solves a polynomial number of artificial LP …

An elementary proof of linear programming optimality conditions without using Farkas’ lemma

Although it is easy to prove the sufficient conditions for optimality of a linear program, the necessary conditions pose a pedagogical challenge. A widespread practice in deriving the necessary conditions is to invoke Farkas’ lemma, but proofs of Farkas’ lemma typically involve “nonlinear” topics such as separating hyperplanes between disjoint convex sets, or else more …
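For reference, the optimality conditions in question are the standard linear programming duality conditions; one common textbook formulation (the paper’s exact setup may differ) is sketched below.

```latex
% A common statement of the LP optimality conditions (textbook
% formulation; the paper's exact setup may differ): a feasible point
% x^* of   min c^T x  s.t.  A x = b,  x >= 0   is optimal if and only
% if there exists a dual vector y with
\[
  A^{\top} y \le c
  \qquad\text{and}\qquad
  c^{\top} x^{*} = b^{\top} y ,
\]
% or, equivalently, such that complementary slackness holds:
\[
  x^{*}_{j}\,\bigl(c_{j} - (A^{\top} y)_{j}\bigr) = 0
  \quad\text{for all } j .
\]
```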

Calmness of linear programs under perturbations of all data: characterization and modulus

This paper provides operative point-based formulas (only involving the nominal data, and not data in a neighborhood) for computing or estimating the calmness modulus of the optimal set (argmin) mapping in linear optimization under uniqueness of nominal optimal solutions. Our analysis is developed in two different parametric settings. First, in the framework of canonical perturbations …
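As background, the standard notion involved is calmness of the optimal-set mapping together with its modulus; the formulation below uses the usual definition from variational analysis with assumed notation, not the paper’s specific formulas.

```latex
% Standard definition (notation assumed): the optimal-set mapping S is
% calm at a point (\bar p, \bar x) of its graph if there exist a
% constant \kappa \ge 0 and neighborhoods U of \bar x and V of \bar p with
\[
  d\bigl(x, S(\bar p)\bigr) \le \kappa \, d(p, \bar p)
  \quad \text{for all } p \in V, \; x \in S(p) \cap U ,
\]
% and the calmness modulus is the infimum of all such constants \kappa.
```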

Robust Stable Payoff Distribution in Stochastic Cooperative Games

Cooperative games with transferable utilities belong to a branch of game theory where groups of players can enter into binding agreements and form coalitions in order to jointly achieve some objectives. In a cooperative setting, one of the most important questions to address is how to establish a payoff distribution among the players in such …
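For reference, in the deterministic benchmark case a “stable” payoff distribution is one in the core; the standard conditions are recalled below (the robust, stochastic version studied in the paper refines this setting).

```latex
% Deterministic benchmark (standard definition): a payoff vector x is a
% stable (core) allocation of a TU game (N, v) if it is efficient and
% no coalition S can improve by deviating:
\[
  \sum_{i \in N} x_i = v(N), \qquad
  \sum_{i \in S} x_i \ge v(S) \quad \text{for all } S \subseteq N .
\]
```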

An improved Kalai-Kleitman bound for the diameter of a polyhedron

Kalai and Kleitman established the bound $n^{\log(d) + 2}$ for the diameter of a $d$-dimensional polyhedron with $n$ facets. Here we improve the bound slightly to $(n-d)^{\log(d)}$.

Citation: School of Operations Research and Information Engineering, Cornell University, Ithaca NY, USA, February 2014.