A New Primal-Dual Interior-Point Algorithm for Second-Order Cone Optimization

We present a primal-dual interior-point algorithm for second-order conic optimization problems based on a specific class of kernel functions. This class has been investigated earlier for the case of linear optimization problems. In this paper we derive the complexity bounds $O(\sqrt{N}(\log N)\log\frac{N}{\epsilon})$ for large-update and $O(\sqrt{N}\log\frac{N}{\epsilon})$ for small-update methods, respectively. Here $N$ denotes the …
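
For orientation (the kernel class itself is only named, not defined, in this snippet): in kernel-function-based interior-point methods a univariate kernel $\psi:(0,\infty)\to[0,\infty)$ with $\psi(1)=\psi'(1)=0$ and $\psi''>0$ induces the proximity measure used to steer the iterates toward the central path. A minimal illustration, not the paper's specific class, is the classical logarithmic kernel:

\[
\Psi(v)=\sum_{i=1}^{N}\psi(v_i),\qquad \psi(t)=\frac{t^2-1}{2}-\ln t,
\]

where, for second-order cones, the $v_i$ are the Jordan-algebra eigenvalues of the suitably scaled primal-dual variable.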

Large-Scale Semidefinite Programming via Saddle Point Mirror-Prox Algorithm

In this paper, we first develop “economical” representations for positive semidefiniteness of well-structured sparse symmetric matrices. Using these representations, we then reformulate well-structured large-scale semidefinite problems into smooth convex-concave saddle point problems, which can be solved by the Prox-method developed in \cite{Nem} with efficiency ${\cal O}(\epsilon^{-1})$. Some numerical implementations for large-scale Lovász capacity and MAXCUT …
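
To fix the problem class (this is the standard relaxation, not the paper's saddle-point reformulation): for a graph with Laplacian $L$, the MAXCUT SDP relaxation reads

\[
\max_{X}\ \tfrac14\langle L,X\rangle\quad\text{s.t.}\quad X_{ii}=1,\ i=1,\dots,n,\quad X\succeq 0,
\]

and it is the sparse, structured constraints on $X\succeq 0$ in problems of this kind that the “economical” semidefiniteness representations aim to exploit before passing to a convex-concave saddle point form amenable to mirror-prox.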

Sums of Squares and Semidefinite Programming Relaxations for Polynomial Optimization Problems with Structured Sparsity

Unconstrained and inequality constrained sparse polynomial optimization problems (POPs) are considered. A correlative sparsity pattern graph is defined to find a certain sparse structure in the objective and constraint polynomials of a POP. Based on this graph, sets of supports for sums of squares (SOS) polynomials that lead to efficient SOS and semidefinite programming (SDP) …
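
As a small illustration of the correlative sparsity idea (a Python sketch with a toy example; the representation of monomials as sets of variable indices and all names are my own, not the paper's): two variables are joined by an edge of the pattern graph whenever they appear together in a monomial of the objective or in the same constraint polynomial.

    from itertools import combinations

    def csp_graph(objective_monomials, constraint_supports):
        """Correlative sparsity pattern graph, returned as a set of edges.

        objective_monomials: one set of variable indices per monomial of the objective.
        constraint_supports: one set of variable indices per constraint polynomial
        (all variables appearing anywhere in that constraint).
        """
        edges = set()
        for support in objective_monomials + constraint_supports:
            for i, j in combinations(sorted(support), 2):
                edges.add((i, j))
        return edges

    # Toy example: f = x0^2*x1 + x2^4 with one constraint coupling x1 and x2.
    print(csp_graph([{0, 1}, {2}], [{1, 2}]))   # {(0, 1), (1, 2)}

Maximal cliques of (a chordal extension of) this graph can then be used to restrict the supports of the SOS polynomials.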

The Q Method for Symmetric Cone Programming

We extend the Q method to symmetric cone programming. An infeasible interior-point algorithm and a Newton-type algorithm are given. We give convergence results for the interior-point algorithm and prove that the Newton-type algorithm is good for …
Citation: AdvOl-Report #2004/18, McMaster University, Advanced Optimization Laboratory, Hamilton, Ontario, Canada, October 2004

An Algorithm for Perturbed Second-order Cone Programs

The second-order cone programming problem is reformulated into several new systems of nonlinear equations. Assuming the perturbation of the data lies in a certain neighborhood of zero, and starting from a solution of the original problem, the semismooth Newton iterates converge Q-quadratically to a solution of the perturbed problem. The algorithm is globalized. Numerical examples …
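
One standard device behind such nonsmooth reformulations (shown purely as an illustration; the paper's specific systems of equations are not reproduced in this snippet) is the closed-form Euclidean projection onto the second-order cone $K=\{(x_0,\bar x):\|\bar x\|\le x_0\}$, which turns the optimality conditions into a semismooth equation suitable for Newton-type methods. A minimal Python sketch:

    import numpy as np

    def project_soc(x):
        """Euclidean projection of x = (x0, xbar) onto the second-order cone."""
        x0, xbar = x[0], x[1:]
        nrm = np.linalg.norm(xbar)
        if nrm <= x0:            # already in the cone
            return x.copy()
        if nrm <= -x0:           # in the polar cone: project to the origin
            return np.zeros_like(x)
        alpha = (x0 + nrm) / 2.0
        return np.concatenate(([alpha], (alpha / nrm) * xbar))

    # A point outside the cone.
    print(project_soc(np.array([1.0, 3.0, 0.0])))   # [2. 2. 0.]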

The Q Method for Second-order Cone Programming

Based on the Q method for SDP, we develop the Q method for SOCP. A modified Q method is also introduced. Properties of the algorithms are discussed. Convergence proofs are given. Finally, we present numerical results.
Citation: AdvOl-Report #2004/15, McMaster University, Advanced Optimization Laboratory
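
For background (standard second-order cone algebra, not an excerpt from the paper): the Q method works with the spectral decomposition of the cone variables. For $x=(x_0,\bar x)\in\mathbb{R}^{1+m}$ with $\bar x\neq 0$,

\[
x=\lambda_1 c_1+\lambda_2 c_2,\qquad \lambda_{1,2}=x_0\pm\|\bar x\|,\qquad c_{1,2}=\frac12\begin{pmatrix}1\\ \pm\,\bar x/\|\bar x\|\end{pmatrix},
\]

and, loosely speaking, the method treats the eigenvalues $\lambda_i$ and the frame $\{c_1,c_2\}$ (the “Q” part) as separate unknowns, in analogy with the $Q\Lambda Q^\top$ parametrization used by the Q method for SDP.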

On the Behavior of the Homogeneous Self-Dual Model for Conic Convex Optimization

There is a natural norm associated with a starting point of the homogeneous self-dual (HSD) embedding model for conic convex optimization. In this norm two measures of the HSD model’s behavior are precisely controlled independent of the problem instance: (i) the sizes of $\epsilon$-optimal solutions, and (ii) the maximum distance of $\epsilon$-optimal solutions to the …
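
For reference, the homogeneous self-dual embedding of the conic pair $\min\{c^\top x: Ax=b,\ x\in K\}$ and its dual can be written (in one common sign convention; the paper's starting point and induced norm are not reproduced here) as

\[
Ax-b\tau=0,\qquad -A^\top y+c\tau-s=0,\qquad b^\top y-c^\top x-\kappa=0,\qquad x\in K,\ s\in K^*,\ \tau\ge 0,\ \kappa\ge 0,
\]

a self-dual system whose solutions either recover optimal primal-dual pairs (when $\tau>0$) or certify infeasibility (when $\kappa>0$).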

A Stable Iterative Method for Linear Programming

This paper studies a new primal-dual interior/exterior-point method for linear programming. We begin with the usual perturbed primal-dual optimality equations $F_\mu(x,y,z)=0$. Under nondegeneracy assumptions, this nonlinear system is well-posed, i.e., it has a nonsingular Jacobian at optimality and is not necessarily ill-conditioned as the iterates approach optimality. We use a simple preprocessing step to eliminate …
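
For a standard-form LP $\min\{c^\top x: Ax=b,\ x\ge 0\}$, the perturbed primal-dual optimality equations referred to above take the usual form (recorded here for orientation; the paper's preprocessing and stable reformulation are not shown)

\[
F_\mu(x,y,z)=\begin{pmatrix}A^\top y+z-c\\ Ax-b\\ XZe-\mu e\end{pmatrix}=0,\qquad x>0,\ z>0,
\]

where $X=\operatorname{Diag}(x)$, $Z=\operatorname{Diag}(z)$, and $e$ is the vector of all ones.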

A New Conjugate Gradient Algorithm Incorporating Adaptive Ellipsoid Preconditioning

The conjugate gradient (CG) algorithm is well known to have excellent theoretical properties for solving linear systems of equations $Ax = b$ where the $n\times n$ matrix $A$ is symmetric positive definite. However, for extremely ill-conditioned matrices the CG algorithm performs poorly in practice. In this paper, we discuss an adaptive preconditioning procedure which improves the …
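
A minimal preconditioned CG sketch in Python (textbook PCG, shown only to indicate where a preconditioner such as the paper's adaptive ellipsoid procedure would plug in; apply_M_inv is a placeholder of my own, not the paper's construction):

    import numpy as np

    def pcg(A, b, apply_M_inv, tol=1e-8, max_iter=1000):
        """Preconditioned conjugate gradients for SPD A; apply_M_inv(r) ~ M^{-1} r."""
        x = np.zeros_like(b)
        r = b - A @ x
        z = apply_M_inv(r)
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            z = apply_M_inv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # With the identity preconditioner, PCG reduces to plain CG.
    A = np.array([[4.0, 1.0], [1.0, 3.0]]); b = np.array([1.0, 2.0])
    print(pcg(A, b, lambda r: r))   # approx. [0.0909, 0.6364]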

A New Complexity Result on Solving the Markov Decision Problem

We present a new complexity result on solving the Markov decision problem (MDP) with $n$ states and a number of actions for each state, a special class of real-number linear programs with the Leontief matrix structure. We prove that, when the discount factor $\theta$ is strictly less than $1$, the problem can be solved in …
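
For context, the discounted MDP admits the standard linear programming formulation (stated here for orientation, not taken from the paper): with rewards $r(s,a)$, transition probabilities $p(s'\mid s,a)$, and discount factor $\theta<1$, the optimal value function solves

\[
\min_{v\in\mathbb{R}^n}\ \sum_{s} v(s)\quad\text{s.t.}\quad v(s)\ \ge\ r(s,a)+\theta\sum_{s'}p(s'\mid s,a)\,v(s')\quad\text{for all }s,a,
\]

and it is this family of linear programs whose Leontief-type constraint structure the complexity result refers to.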