A tight iteration-complexity upper bound for the MTY predictor-corrector algorithm via redundant Klee-Minty cubes

It is an open question whether there is an interior-point algorithm for linear optimization problems with a lower iteration-complexity than the classical bound $\mathcal{O}(\sqrt{n} \log(\frac{\mu_1}{\mu_0}))$. This paper provides a negative answer to that question for a variant of the Mizuno-Todd-Ye predictor-corrector algorithm. In fact, we prove that for any $\epsilon >0$, there is a redundant …
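
For concreteness, the sketch below builds one common unit-cube form of the Klee-Minty construction that such examples start from and solves it with SciPy's linprog; the redundant constraints the paper adds are not included, and the exact formulation is an illustrative assumption rather than the paper's setup.

```python
import numpy as np
from scipy.optimize import linprog

# One common unit-cube form of the Klee-Minty construction (illustrative only;
# the redundant constraints added in the paper are NOT included here):
#   max x_d  s.t.  0 <= x_1 <= 1,  eps*x_{k-1} <= x_k <= 1 - eps*x_{k-1},  k = 2..d.
def klee_minty(d, eps=0.25):
    A, b = [], []
    for k in range(d):
        upper = np.zeros(d); upper[k] = 1.0    # x_k <= 1 - eps*x_{k-1}
        lower = np.zeros(d); lower[k] = -1.0   # x_k >= eps*x_{k-1}
        if k > 0:
            upper[k - 1] = eps
            lower[k - 1] = eps
        A += [upper, lower]
        b += [1.0, 0.0]
    c = np.zeros(d); c[-1] = -1.0              # linprog minimizes, so use -x_d
    return np.array(A), np.array(b), c

A, b, c = klee_minty(6)
res = linprog(c, A_ub=A, b_ub=b, bounds=(None, None), method="highs")
print(res.x)   # the maximizer of x_d is the vertex (0, ..., 0, 1)
```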

A Strongly Polynomial Simplex Method for Totally Unimodular LP

Kitahara and Mizuno obtained new bounds on the number of distinct solutions generated by the simplex method for linear programming (LP). In this paper, we combine the results of Kitahara and Mizuno with Tardos’s strongly polynomial algorithm, and propose an algorithm for solving a standard form LP problem. The algorithm solves a polynomial number of artificial LP …
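
As a toy illustration of the problem class only (not of the proposed algorithm), the sketch below solves a small minimum-cost-flow LP, whose node-arc incidence matrix is totally unimodular, with SciPy's dual-simplex backend; the instance data are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Node-arc incidence matrix of a small directed graph (totally unimodular).
# Arcs: (0,1), (0,2), (1,2), (1,3), (2,3); +1 at the tail node, -1 at the head.
N = np.array([[ 1,  1,  0,  0,  0],
              [-1,  0,  1,  1,  0],
              [ 0, -1, -1,  0,  1],
              [ 0,  0,  0, -1, -1]])
cost = [2, 2, 1, 3, 1]
supply = [3, 0, 0, -3]                        # ship 3 units from node 0 to node 3
cap = [(0, 2), (0, 2), (0, 1), (0, 2), (0, 2)]  # arc capacities

res = linprog(cost, A_eq=N, b_eq=supply, bounds=cap, method="highs-ds")
print(res.x)   # the vertex solution is integral because N is totally unimodular
```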

An elementary proof of linear programming optimality conditions without using Farkas’ lemma

Although it is easy to prove the sufficient conditions for optimality of a linear program, the necessary conditions pose a pedagogical challenge. A widespread practice in deriving the necessary conditions is to invoke Farkas’ lemma, but proofs of Farkas’ lemma typically involve “nonlinear” topics such as separating hyperplanes between disjoint convex sets, or else more …
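
For reference, the optimality conditions in question, written here for the standard-form primal $\min\{c^{\mathsf T}x : Ax = b,\ x \ge 0\}$ (the paper may use a different LP format), are:
\[
x^* \text{ is optimal} \iff \exists\,(y^*,s^*)\ \text{such that}\
\begin{cases}
Ax^*=b,\ x^*\ge 0 & \text{(primal feasibility)}\\
A^{\mathsf T}y^*+s^*=c,\ s^*\ge 0 & \text{(dual feasibility)}\\
(x^*)^{\mathsf T}s^*=0 & \text{(complementary slackness).}
\end{cases}
\]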

Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems

Optimization methods are at the core of many problems in signal/image processing, computer vision, and machine learning. For a long time, it has been recognized that looking at the dual of an optimization problem may drastically simplify its solution. Deriving efficient strategies which jointly bring into play the primal and the dual problems is however …
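
As one concrete instance of this kind of primal-dual scheme, the sketch below implements the well-known primal-dual hybrid gradient (Chambolle-Pock) iteration for an $\ell_1$-regularized least-squares problem; the instance data and parameter choices are illustrative assumptions, not something taken from the paper.

```python
import numpy as np

def chambolle_pock_l1ls(A, b, lam, n_iter=500):
    """Primal-dual hybrid gradient (Chambolle-Pock) iteration for
       min_x  lam*||x||_1 + 0.5*||A x - b||^2,
    viewed as min_x g(x) + f(Ax) with g = lam*||.||_1 and f = 0.5*||. - b||^2."""
    m, n = A.shape
    L = np.linalg.norm(A, 2)            # operator norm ||A||
    tau = sigma = 0.99 / L              # step sizes with tau*sigma*L^2 < 1
    x = np.zeros(n); x_bar = x.copy(); y = np.zeros(m)
    for _ in range(n_iter):
        # dual step: prox of sigma*f*, where f*(y) = 0.5*||y||^2 + <b, y>
        y = (y + sigma * (A @ x_bar) - sigma * b) / (1.0 + sigma)
        # primal step: prox of tau*g is soft-thresholding
        x_new = x - tau * (A.T @ y)
        x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - tau * lam, 0.0)
        # extrapolation (theta = 1)
        x_bar = 2.0 * x_new - x
        x = x_new
    return x

# toy usage on a small synthetic sparse-recovery instance
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = chambolle_pock_l1ls(A, b, lam=0.1)
print(np.argsort(-np.abs(x_hat))[:3])   # indices of the largest recovered entries
```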

Calmness of linear programs under perturbations of all data: characterization and modulus

This paper provides operative point-based formulas (only involving the nominal data, and not data in a neighborhood) for computing or estimating the calmness modulus of the optimal set (argmin) mapping in linear optimization under uniqueness of nominal optimal solutions. Our analysis is developed in two different parametric settings. First, in the framework of canonical perturbations …
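
For orientation, one standard way to state the calmness modulus of a set-valued mapping $\mathcal{S}$ at a point $(\bar p,\bar x)$ of its graph is (our notation, which need not match the paper's):
\[
\operatorname{clm}\mathcal{S}(\bar p,\bar x)\;=\;\inf\Bigl\{\kappa\ge 0:\ \exists\ \text{neighborhoods } U\ni\bar x,\ V\ni\bar p \text{ with } d\bigl(x,\mathcal{S}(\bar p)\bigr)\le\kappa\,d(p,\bar p)\ \ \forall\,p\in V,\ \forall\,x\in\mathcal{S}(p)\cap U\Bigr\}.
\]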

Active-set prediction for interior point methods using controlled perturbations

We propose the use of controlled perturbations to address the challenging question of optimal active-set prediction for interior point methods. Namely, in the context of linear programming, we consider perturbing the inequality constraints/bounds so as to enlarge the feasible set. We show that if the perturbations are chosen appropriately, the solution of the original problem …
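
The following is a toy sketch of the perturbation idea only, not the paper's prediction rule: enlarge the right-hand sides by a small $\lambda$, solve the perturbed problem with an interior-point solver, and compare the resulting active set with that of the unperturbed problem. The random data, solver choice, and tolerances are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 60, 20
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + rng.uniform(0.1, 1.0, m)  # strictly feasible by construction
c = -A.T @ rng.uniform(0.0, 1.0, m)                        # keeps the LP bounded below
lam = 1e-3                                                 # size of the controlled perturbation

# solve the enlarged problem {x : Ax <= b + lam*e} with an interior-point solver
pert = linprog(c, A_ub=A, b_ub=b + lam, bounds=(None, None), method="highs-ipm")
orig = linprog(c, A_ub=A, b_ub=b, bounds=(None, None), method="highs")

predicted = set(np.flatnonzero(b + lam - A @ pert.x < 1e-6))
actual = set(np.flatnonzero(b - A @ orig.x < 1e-6))
print(predicted == actual)   # for small lam the two active sets typically coincide
```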

An LP-based Algorithm to Test Copositivity

A symmetric matrix is called copositive if it generates a quadratic form taking no negative values over the nonnegative orthant, and the linear optimization problem over the set of copositive matrices is called the copositive programming problem. Recently, many studies have been done on the copositive programming problem (see, for example, \cite{aDUR10, aBOMZE12}). Among others, …
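
In symbols, with $e$ the all-ones vector, copositivity can be restated as the standard background fact
\[
A=A^{\mathsf T}\ \text{copositive} \iff x^{\mathsf T}Ax \ge 0\ \ \forall\, x\ge 0 \iff \min\{\,x^{\mathsf T}Ax : e^{\mathsf T}x=1,\ x\ge 0\,\}\ \ge\ 0,
\]
where the last equivalence follows from the homogeneity of $x\mapsto x^{\mathsf T}Ax$; this restatement is generic background, not the LP-based test proposed in the paper.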

An improved and simplified full-Newton step O(n) infeasible interior-point method for Linear Optimization

We present an improved version of an infeasible interior-point method for linear optimization published in 2006. In the earlier version each iteration consisted of one so-called infeasibility step and a few – at most three – centering steps. In this paper each iteration consists of only an infeasibility step, whereas the iteration bound improves the …
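
For background, a generic (infeasible) primal-dual Newton system for the standard-form pair $\min\{c^{\mathsf T}x : Ax=b,\ x\ge 0\}$ and $\max\{b^{\mathsf T}y : A^{\mathsf T}y+s=c,\ s\ge 0\}$, with target parameter $\mu$, reads
\[
A\,\Delta x = b - Ax,\qquad
A^{\mathsf T}\Delta y + \Delta s = c - A^{\mathsf T}y - s,\qquad
S\,\Delta x + X\,\Delta s = \mu e - XSe,
\]
where $X=\operatorname{diag}(x)$ and $S=\operatorname{diag}(s)$; the paper's specific full-Newton infeasibility step differs in its target and step-size choices, so this display is generic background only.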

A strongly polynomial algorithm for linear optimization problems having 0-1 optimal solutions

We present a strongly polynomial algorithm for linear optimization problems of the form $\min\{cx \mid Ax = b,\ x \ge 0\}$ having 0-1 vectors among their optimal solutions. The algorithm runs in time $O(n^4 \max\{m, \log n\})$, where n is the number of variables and m is the number of equations. The algorithm also constructs necessary and sufficient optimality …
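
As a minimal illustration of this problem class (and only of the problem class, not of the proposed algorithm), the assignment LP below is a standard-form problem $\min\{cx \mid Ax=b,\ x\ge 0\}$ whose optimal vertices are 0-1 permutation matrices; the cost matrix is invented.

```python
import numpy as np
from scipy.optimize import linprog

# 3x3 assignment problem in standard form  min c^T x  s.t.  Ax = b, x >= 0.
# The constraint matrix (row/column sums of a 3x3 assignment) is totally
# unimodular, so an optimal vertex is a 0-1 permutation matrix.
C = np.array([[4, 1, 3],
              [2, 0, 5],
              [3, 2, 2]], dtype=float)
n = 3
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1   # each row of the assignment sums to 1
    A_eq[n + i, i::n] = 1            # each column sums to 1
b_eq = np.ones(2 * n)

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print(res.x.reshape(n, n))           # a 0-1 optimal solution (permutation matrix)
```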

Equivalence and Strong Equivalence between Sparsest and Least $\ell_1$-Norm Nonnegative Solutions of Linear Systems and Their Application

Many practical problems can be formulated as $\ell_0$-minimization problems with nonnegativity constraints, which seek the sparsest nonnegative solutions to underdetermined linear systems. Recent studies indicate that $\ell_1$-minimization is efficient for solving some classes of $\ell_0$-minimization problems. From a mathematical point of view, however, the understanding of the relationship between $\ell_0$- and $\ell_1$-minimization remains incomplete. In …
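
Since $\|x\|_1 = e^{\mathsf T}x$ whenever $x \ge 0$, the least $\ell_1$-norm nonnegative solution can be computed by a single LP. The sketch below (with invented data) does so with SciPy's linprog; whether its support matches that of the sparsest nonnegative solution is the kind of equivalence question examined here.

```python
import numpy as np
from scipy.optimize import linprog

# Least l1-norm nonnegative solution of an underdetermined system Ax = b:
# because x >= 0 gives ||x||_1 = sum(x), this is the LP  min 1^T x  s.t.  Ax = b, x >= 0.
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 30))
x_sparse = np.zeros(30); x_sparse[[2, 11, 25]] = [1.0, 0.5, 2.0]  # a sparse nonnegative solution
b = A @ x_sparse

res = linprog(np.ones(30), A_eq=A, b_eq=b, bounds=(0, None), method="highs")
print(np.flatnonzero(res.x > 1e-8))   # support of the least l1-norm nonnegative solution
```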