Generating and Measuring Instances of Hard Semidefinite Programs (SDP)

Linear Programming (LP) problems with a finite optimal value have a zero duality gap and a primal-dual strictly complementary optimal solution pair. In contrast, there exist Semidefinite Programming (SDP) problems that have a nonzero duality gap (different primal and dual optimal values, not both infinite). The duality gap is assured to be zero if …
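As a concrete illustration (a standard textbook instance, not taken from this paper), the following SDP has a finite nonzero duality gap:

$$ p^* \;=\; \min_{x \in \mathbb{R}^2} \; x_1 \quad \text{s.t.} \quad \begin{pmatrix} 0 & x_1 & 0 \\ x_1 & x_2 & 0 \\ 0 & 0 & x_1 + 1 \end{pmatrix} \succeq 0. $$

Positive semidefiniteness forces $x_1 = 0$ (a zero diagonal entry zeroes out its row and column), so $p^* = 0$, while a short computation shows the dual optimal value is $d^* = -1$: both values are finite, yet the gap equals 1.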

Computing robust basestock levels

This paper considers how to optimally set the basestock level for a single buffer when demand is uncertain, in a robust framework. We present a family of algorithms based on decomposition that scale well to problems with hundreds of time periods, and theoretical results on more general models. Citation: CORC report TR-2005-09, Columbia University, November 2005.
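For intuition only, here is a minimal sketch of robust basestock selection under a simple box uncertainty set; it is not the authors' decomposition algorithm, and all names and parameter values are illustrative assumptions:

```python
# Minimal robust basestock sketch (illustrative only; not the paper's method).
# Demand in each period t lies in [d_lo[t], d_hi[t]] (box uncertainty).
# Under a basestock policy with level S and zero lead time, end-of-period
# inventory is S - d_t, costing h per unit held and b per unit backordered.

def worst_case_cost(S, d_lo, d_hi, h=1.0, b=4.0):
    """Worst-case cost summed over the horizon.

    Each period's cost is convex in the demand, so with interval
    uncertainty the worst case is attained at an endpoint.
    """
    total = 0.0
    for lo, hi in zip(d_lo, d_hi):
        cost_lo = h * max(S - lo, 0) + b * max(lo - S, 0)
        cost_hi = h * max(S - hi, 0) + b * max(hi - S, 0)
        total += max(cost_lo, cost_hi)
    return total

def best_basestock(d_lo, d_hi, candidates):
    """Brute-force search over candidate basestock levels."""
    return min(candidates, key=lambda S: worst_case_cost(S, d_lo, d_hi))

if __name__ == "__main__":
    d_lo = [8, 10, 12, 9]    # assumed lower demand bounds per period
    d_hi = [14, 18, 20, 15]  # assumed upper demand bounds per period
    S = best_basestock(d_lo, d_hi, candidates=range(0, 31))
    print(S, worst_case_cost(S, d_lo, d_hi))
```

At this toy scale brute force suffices; the point of the paper's decomposition is precisely to avoid such enumeration on problems with hundreds of periods.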

Kantorovich’s Majorants Principle for Newton’s Method

We prove Kantorovich’s theorem on Newton’s method using a convergence analysis that makes clear the relationship between the majorant function and the nonlinear operator under consideration. This approach enables us to drop the assumption that the majorant function has a second root, while still guaranteeing a Q-quadratic convergence rate …
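For context, one classical instantiation (our summary; notation may differ from the paper): if $\|F'(x_0)^{-1}F(x_0)\| \le \eta$ and $F'$ satisfies an affine-invariant Lipschitz condition with constant $K$, the quadratic majorant function is

$$ f(t) = \frac{K}{2}\,t^2 - t + \eta, $$

and Kantorovich's condition $2K\eta \le 1$ (with strict inequality for the Q-quadratic rate) guarantees that Newton's method started at $x_0$ converges to a root $x^*$, with the roots of $f$ bounding $\|x^* - x_0\|$. In more general majorant settings the existence of a second root of $f$ was traditionally assumed; the point here is that this assumption can be dropped.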

A Note on Sparse SOS and SDP Relaxations for Polynomial Optimization Problems over Symmetric Cones

This short note extends the sparse SOS (sum of squares) and SDP (semidefinite programming) relaxation proposed by Waki, Kim, Kojima and Muramatsu for normal POPs (polynomial optimization problems) to POPs over symmetric cones, and establishes its theoretical convergence based on the recent convergence result by Lasserre on the sparse SOS and SDP relaxation for normal …
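To fix ideas, the dense SOS relaxation of an unconstrained POP (standard background, not specific to this note) replaces nonnegativity of $p(x) - \lambda$ by an SOS certificate:

$$ p^{\mathrm{sos}} = \max\{\lambda \in \mathbb{R} : p - \lambda \text{ is a sum of squares}\} \;\le\; p^* = \min_x p(x), $$

which is an SDP because $p - \lambda$ is SOS exactly when $p(x) - \lambda = z(x)^T Q\, z(x)$ for some $Q \succeq 0$ and a vector of monomials $z(x)$; the sparse variant restricts $z(x)$ to monomials supported on overlapping cliques of variables, which shrinks the SDP dramatically.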

Finding the best root node strategy for the approximation of the time-indexed bound in min-sum scheduling

We identify the best root node strategy for the approximation of the time-indexed bound in min-sum scheduling by sorting through various options that involve the primal simplex, dual simplex, and barrier methods for linear programming, the network simplex method for network flow problems, and Dantzig-Wolfe decomposition and column generation. Citation: Submitted for publication.
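For reference, the bound in question comes from the LP relaxation of the standard time-indexed formulation (sketched here under the usual single-machine assumptions): with $x_{jt} = 1$ if job $j$, of processing time $p_j$, starts at time $t$, and $c_{jt}$ the resulting cost,

$$ \min \sum_j \sum_t c_{jt}\, x_{jt} \quad \text{s.t.} \quad \sum_t x_{jt} = 1 \;\; \forall j, \qquad \sum_j \sum_{s=t-p_j+1}^{t} x_{js} \le 1 \;\; \forall t, \qquad x_{jt} \ge 0. $$

The relaxation is strong but has a huge number of variables, which is why the choice of root-node LP strategy matters.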

Hyperbolic Polynomials Approach to Van der Waerden/Schrijver-Valiant-like Conjectures

The paper describes various combinatorial and algorithmic applications of hyperbolic (multivariate) polynomials. Section 2.2 introduces a new class of polynomials, which includes hyperbolic polynomials as well as volume polynomials $Vol(x_1C_1+\cdots+x_nC_n)$, where the $C_i$ are convex compact subsets of $\mathbb{R}^n$. This extension leads to a randomized poly-time algorithm to approximate the mixed volume $M(C_1,\dots,C_n)$ within …
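For background, the standard definition consistent with the abstract's notation: $Vol(x_1C_1+\cdots+x_nC_n)$ is a homogeneous polynomial of degree $n$ in $x_1,\dots,x_n \ge 0$, and

$$ M(C_1,\dots,C_n) = \frac{\partial^n}{\partial x_1 \cdots \partial x_n}\, Vol(x_1C_1+\cdots+x_nC_n), $$

which, depending on convention, equals the classically normalized mixed volume up to a factor of $n!$.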

The Effects of Adding Objectives to an Optimization Problem on the Solution Set

Suppose that for a given optimization problem (which might be a multicriteria problem or a single-criterion problem), an additional objective function is introduced. How does the set of solutions, i.e. the set of efficient points, change when the new multicriteria problem is considered instead of the old one? How does the set of properly efficient …
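The standard notion behind the question (our summary): for objectives $f = (f_1,\dots,f_k)$ minimized over a feasible set $X$, a point $x \in X$ is efficient if no $y \in X$ satisfies $f_i(y) \le f_i(x)$ for all $i$ with $f(y) \ne f(x)$; properly efficient points additionally exclude unbounded trade-offs between the objectives.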

Exact regularization of linear programs

We show that linear programs (LPs) admit regularizations that either contract the original (primal) solution set or leave it unchanged. Any regularization function that is convex and has compact level sets is allowed; differentiability is not required. This is an extension of the result first described by Mangasarian and Meyer (SIAM J. Control Optim., 17(6), pp. …
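Concretely, the regularized problem has the form (standard setup in the exact-regularization literature):

$$ \min_x \; c^T x + \epsilon\, \phi(x) \quad \text{s.t.} \quad Ax = b, \;\; x \ge 0, $$

with $\phi$ convex with compact level sets and $\epsilon > 0$; exact regularization means there is a threshold $\bar\epsilon > 0$ such that for every $\epsilon \in (0, \bar\epsilon]$ the regularized solutions also solve the original LP, typically selecting the LP solutions that minimize $\phi$.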

Efficient Schemes for Robust IMRT Treatment Planning

We use robust optimization techniques to formulate an IMRT treatment planning problem in which the dose matrices are uncertain, due to both dose calculation errors and inter-fraction positional uncertainty of tumor and organs. When the uncertainty is taken into account, the original linear programming formulation becomes a second-order cone program. We describe a novel and …
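The LP-to-SOCP step is the standard robust-counterpart mechanism (background, not this paper's specific model): if a constraint's coefficient vector is only known to lie in an ellipsoid $\{\bar a + Pu : \|u\|_2 \le 1\}$, then requiring $a^T x \le b$ for every such $a$ is equivalent to the second-order cone constraint

$$ \bar a^T x + \|P^T x\|_2 \le b. $$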

On the behavior of the conjugate-gradient method on ill-conditioned problems

We study the behavior of the conjugate-gradient method for solving a set of linear equations, where the matrix is symmetric positive definite with one set of eigenvalues that are large while the remaining ones are small. We characterize the behavior of the residuals associated with the large eigenvalues throughout the iterations, and also characterize the …
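A minimal numerical illustration of the two-cluster setting (our construction; the matrix size, eigenvalue clusters, and tolerances are arbitrary assumptions):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Plain conjugate-gradient iteration; returns the residual-norm history."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    history = [np.linalg.norm(r)]
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        history.append(np.linalg.norm(r_new))
        if history[-1] < tol:
            break
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x, history

rng = np.random.default_rng(0)
n = 100
# Two eigenvalue clusters: 5 large (around 1e3) and 95 small (around 1e-2).
eigs = np.concatenate([1e3 * (1 + rng.random(5)), 1e-2 * (1 + rng.random(95))])
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal basis
A = Q @ np.diag(eigs) @ Q.T                        # SPD by construction
b = rng.standard_normal(n)
_, res = conjugate_gradient(A, b)
print([f"{r:.2e}" for r in res[:12]])  # residual components tied to the large
                                       # eigenvalues are resolved within a few
                                       # iterations; the small cluster follows
```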