A Near Maximum Likelihood Decoding Algorithm for MIMO Systems Based on Semi-Definite Programming

In Multi-Input Multi-Output (MIMO) systems, Maximum-Likelihood (ML) decoding is equivalent to finding the closest lattice point in an N-dimensional complex space. In general, this problem is known to be NP-hard. In this paper, we propose a quasi-maximum-likelihood algorithm based on Semi-Definite Programming (SDP). We introduce several SDP relaxation models for MIMO systems, with …
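For concreteness, the ML detection problem and a generic SDP relaxation of it can be written as follows (a Shor-type lifting for real $\pm 1$ symbols, given only as background; the relaxation models introduced in the paper may differ). Given the received vector $y = Hs + n$, ML decoding solves
\[
\hat{s} = \arg\min_{s \in \mathcal{S}^N} \; \| y - H s \|^2 ,
\]
and with $x = (s^T, 1)^T$, $X = x x^T$ and
\[
L = \begin{pmatrix} H^T H & -H^T y \\ -y^T H & y^T y \end{pmatrix},
\]
the relaxation minimizes $\operatorname{Tr}(L X)$ subject to $X \succeq 0$ and $X_{ii} = 1$, i.e., the rank-one constraint on $X$ is dropped.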

Efficient and cheap bounds for (standard) quadratic optimization

A standard quadratic optimization problem (StQP) consists of minimizing a quadratic form over a simplex. A number of problems can be transformed into an StQP, including the general quadratic problem over a polytope and the maximum clique problem in a graph. In this paper we present several polynomial-time bounds for StQP, ranging from very simple …
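In symbols, the StQP is
\[
\min \{\, x^T Q x \;:\; e^T x = 1, \; x \ge 0 \,\},
\]
where $e$ is the all-ones vector. As an illustration of how very cheap bounds arise (these particular bounds are standard and need not coincide with those developed in the paper): the vertices $e_i$ of the simplex are feasible, so $\min_i Q_{ii}$ is an upper bound on the optimal value, while $\min_{i,j} Q_{ij}$ is a lower bound, since on the simplex $x^T Q x = \sum_{i,j} Q_{ij} x_i x_j$ is a convex combination of the entries of $Q$.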

Semidefinite-Based Branch-and-Bound for Nonconvex Quadratic Programming

This paper presents a branch-and-bound algorithm for nonconvex quadratic programming, which is based on solving semidefinite relaxations at each node of the enumeration tree. The method is motivated by a recent branch-and-cut approach for the box-constrained case that employs linear relaxations of the first-order KKT conditions. We discuss certain limitations of linear relaxations when handling …
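As background on the type of relaxation involved (the basic Shor relaxation, given here only as context; the relaxations solved at the nodes of the tree may differ), for a QP $\min \{ \tfrac12 x^T Q x + c^T x : l \le x \le u \}$ one replaces the product $x x^T$ by a matrix variable $Y$ and solves
\[
\min \; \tfrac12 \langle Q, Y \rangle + c^T x \quad \text{s.t.} \quad l \le x \le u, \;\; \begin{pmatrix} 1 & x^T \\ x & Y \end{pmatrix} \succeq 0,
\]
optionally tightened with valid linear inequalities in $(x, Y)$ such as the RLT products of the bound constraints; branching then subdivides the box.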

An Extension of the Conjugate Directions Method With Orthogonalization to Large-Scale Problems With Bound Constraints

In our reports at GAMM-04 and ECCOMAS-04, we presented a new conjugate directions method for large-scale unconstrained minimization problems. The high efficiency of this method is ensured by an orthogonalization procedure: when constructing the next conjugate vector, we use the component of the gradient that is orthogonal to the subspace of preceding …
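A sketch of such an orthogonalization step (the exact recurrence used by the method may differ): if $d_1, \dots, d_{k-1}$ are the previously constructed directions, the next direction is built from the projected gradient
\[
\tilde g_k = g_k - \sum_{i<k} \frac{d_i^T g_k}{d_i^T d_i}\, d_i ,
\]
i.e., the component of $g_k$ orthogonal to $\operatorname{span}\{d_1,\dots,d_{k-1}\}$, which keeps the generated directions numerically independent even after many iterations.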

Compact linearization for bilinear mixed-integer problems

We present a compact linearization for a broad class of bilinear 0-1 mixed-integer problems subject to assignment constraints. We apply the linearization to three classes of problems: quadratic assignment, multiprocessor scheduling with communication delays, and graph partitioning, and show that it yields faster solution times.

Citation: DEI, Politecnico di Milano, Working paper, April 2005.
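For context, the classical (non-compact) linearization replaces each bilinear term $x_i x_j$ of binary variables by a new variable $y_{ij}$ with
\[
y_{ij} \le x_i, \qquad y_{ij} \le x_j, \qquad y_{ij} \ge x_i + x_j - 1, \qquad y_{ij} \ge 0 .
\]
Compact linearizations instead exploit the assignment constraints (e.g., $\sum_j x_{ij} = 1$) to aggregate many of these inequalities into a much smaller set of equations; the formulation above is standard textbook material, not the specific compact linearization proposed here.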

Sensitivity analysis in convex quadratic optimization: simultaneous perturbation of the objective and right-hand-side vectors

In this paper we study the behavior of Convex Quadratic Optimization problems when variation occurs simultaneously in the right-hand side vector of the constraints and in the coefficient vector of the linear term in the objective function. It is proven that the optimal value function is piecewise-quadratic. The concepts of transition point and invariancy interval …
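In a standard simultaneous-perturbation model (stated here for illustration; the paper's notation may differ), one studies
\[
\phi(\epsilon) = \min_x \left\{ \tfrac12 x^T Q x + (c + \epsilon\, \Delta c)^T x \;:\; A x = b + \epsilon\, \Delta b, \; x \ge 0 \right\},
\]
and piecewise-quadratic means that the parameter axis decomposes into invariancy intervals, separated by transition points, on each of which $\phi(\epsilon)$ is a quadratic function of $\epsilon$.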

Computational experience with an interior point algorithm for large scale contact problems

In this paper we present an interior point method for large scale Signorini elastic contact problems. We study the case of an elastic body in frictionless contact with a rigid foundation. Primal and primal-dual algorithms are developed to solve the quadratic optimization problem arising in the variational formulation. Our computational study confirms the efficiency of …
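As a generic illustration of the quadratic program in question (assumed here; the discretization details are in the paper), a finite-element discretization of the frictionless Signorini problem leads to
\[
\min_u \; \tfrac12 u^T K u - f^T u \quad \text{s.t.} \quad B u \le g ,
\]
where $K$ is the symmetric positive semidefinite stiffness matrix, $f$ the load vector, and $B u \le g$ the linearized non-penetration conditions on the candidate contact boundary.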

A New Conjugate Gradient Algorithm Incorporating Adaptive Ellipsoid Preconditioning

The conjugate gradient (CG) algorithm is well known to have excellent theoretical properties for solving linear systems of equations $Ax = b$, where the $n\times n$ matrix $A$ is symmetric positive definite. However, for extremely ill-conditioned matrices the CG algorithm performs poorly in practice. In this paper, we discuss an adaptive preconditioning procedure which improves the …
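For reference, a minimal preconditioned CG iteration in Python with a user-supplied preconditioner $M \approx A$ (the adaptive ellipsoid preconditioner discussed in the paper would update this operator during the iteration, which is not shown here):

import numpy as np

def pcg(A, b, apply_Minv, x0=None, tol=1e-8, maxiter=1000):
    # Preconditioned conjugate gradients for symmetric positive definite A.
    # apply_Minv(r) must return M^{-1} r for a symmetric positive definite M.
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float).copy()
    r = b - A @ x                      # residual
    z = apply_Minv(r)                  # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)          # exact line-search step length
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = apply_Minv(r)
        rz_new = r @ z
        beta = rz_new / rz             # conjugacy-preserving update
        p = z + beta * p
        rz = rz_new
    return x

# Example: Jacobi (diagonal) preconditioning.
# x = pcg(A, b, lambda r: r / np.diag(A))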

Newton-KKT Interior-Point Methods for Indefinite Quadratic Programming

Two interior-point algorithms are proposed and analyzed for the (local) solution of (possibly) indefinite quadratic programming problems. They are of the Newton-KKT variety in that (much like in the case of primal-dual algorithms for linear programming) search directions for the 'primal' variables and the Karush-Kuhn-Tucker (KKT) multiplier estimates are components of the Newton (or quasi-Newton) …
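To make the Newton-KKT idea concrete, consider the generic inequality-constrained QP (a sketch only; the algorithms in the paper add the safeguards needed for indefinite $H$)
\[
\min_x \; \tfrac12 x^T H x + c^T x \quad \text{s.t.} \quad A x \ge b ,
\]
with KKT conditions $H x + c - A^T \lambda = 0$, $\Lambda s = 0$, $s = A x - b \ge 0$, $\lambda \ge 0$, where $\Lambda = \operatorname{diag}(\lambda)$. With $S = \operatorname{diag}(s)$ and barrier parameter $\mu > 0$, the primal-dual Newton step solves
\[
\begin{pmatrix} H & -A^T \\ \Lambda A & S \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta \lambda \end{pmatrix} = - \begin{pmatrix} H x + c - A^T \lambda \\ \Lambda s - \mu e \end{pmatrix},
\]
and $\mu$ is driven to zero as the iterates converge.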

Sensitivity Analysis in Convex Quadratic Optimization: Invariant Support Set Interval

In sensitivity analysis one wants to know how the problem and its optimal solutions change under variation of the input data. We consider the case where variation occurs in the right-hand side of the constraints and/or in the linear term of the objective function. We are interested in finding the range of the …
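Here the support set of an optimal solution $x^*$ is $\sigma(x^*) = \{\, i : x_i^* > 0 \,\}$ (a standard definition, recalled for completeness); an invariant support set interval is then a range of the perturbation parameter over which the parametric problem admits optimal solutions sharing the same support set, so the structure of the optimal solution is preserved while its numerical values vary.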