An Infeasible Interior-Point Algorithm with Full-Newton Step for Linear Optimization

In this paper we present an infeasible interior-point algorithm for solving linear optimization problems. This algorithm is obtained by modifying the search direction in the algorithm of [C. Roos, A full-Newton step ${O}(n)$ infeasible interior-point algorithm for linear optimization, SIAM Journal on Optimization, 16(4), 2006, 1110-1136]. The analysis of our algorithm is much simpler than that of Roos's algorithm … Read more
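
For orientation, full-Newton step infeasible IPMs for linear optimization obtain their search direction from a linearized system of the following kind (a generic sketch in standard primal-dual notation for the pair $\min\{c^{T}x : Ax=b,\ x\ge 0\}$ and its dual; the modified direction studied in the paper is not reproduced here):
\[
A\,\Delta x = b - Ax,\qquad
A^{T}\Delta y + \Delta s = c - A^{T}y - s,\qquad
s\,\Delta x + x\,\Delta s = \mu e - xs,
\]
where products of vectors are taken componentwise, $e$ is the all-ones vector and $\mu>0$ is the barrier parameter; a full-Newton step then updates $(x,y,s)\leftarrow(x+\Delta x,\,y+\Delta y,\,s+\Delta s)$ without any line search.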

Full Nesterov-Todd Step Primal-Dual Interior-Point Methods for Second-Order Cone Optimization

After a brief introduction to Jordan algebras, we present a primal-dual interior-point algorithm for second-order cone optimization that uses full Nesterov-Todd steps; no line searches are required. The number of iterations of the algorithm is $O(\sqrt{N}\log({N}/{\varepsilon}))$, where $N$ stands for the number of second-order cones in the problem formulation and $\varepsilon$ is the desired accuracy. … Read more
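
As a reminder of the setting (standard notation, not taken from the abstract), a second-order cone optimization problem minimizes a linear objective over the intersection of an affine space with a product of Lorentz cones:
\[
\min\;\{\,c^{T}x \;:\; Ax=b,\ x\in\mathcal{K}\,\},\qquad
\mathcal{K}=\mathcal{K}_{1}\times\cdots\times\mathcal{K}_{N},\qquad
\mathcal{K}_{j}=\{\,(x_{1};\bar{x})\in\mathbb{R}^{n_{j}}\,:\,x_{1}\ge\|\bar{x}\|\,\},
\]
so the stated iteration bound $O(\sqrt{N}\log({N}/{\varepsilon}))$ depends on the number of cones $N$, not on their dimensions $n_{j}$.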

A New Full-Newton Step $O(n)$ Infeasible Interior-Point Algorithm for Semidefinite Optimization

Interior-point methods for semidefinite optimization have been studied intensively, due to their polynomial complexity and practical efficiency. Recently, the second author designed an efficient primal-dual infeasible interior-point algorithm with full Newton steps for linear optimization problems. In this paper we extend the algorithm to semidefinite optimization. The algorithm constructs strictly feasible iterates for a sequence … Read more
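
Roughly, in the LO case the iterates are kept strictly feasible for perturbed problems of the following form, with $\nu\in(0,1]$ driven to zero (a sketch in standard notation, meant as an illustration rather than the paper's exact formulation; the SDO analogue replaces vectors by symmetric matrices):
\[
(P_{\nu}):\quad \min\;\{\,(c-\nu r_{c}^{0})^{T}x \;:\; Ax=b-\nu r_{b}^{0},\ x\ge 0\,\},\qquad
r_{b}^{0}=b-Ax^{0},\quad r_{c}^{0}=c-A^{T}y^{0}-s^{0},
\]
where $(x^{0},y^{0},s^{0})$ is the starting point; for $\nu=1$ this point is feasible for $(P_{\nu})$, and as $\nu\to 0$ the perturbed problems approach the original one.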

A Full-Newton Step $O(n)$ Infeasible Interior-Point Algorithm for Linear Optimization

We present a full-Newton step infeasible interior-point algorithm. It is shown that at most $O(n)$ (inner) iterations suffice to reduce the duality gap and the residuals by the factor $\frac1{e}$. The bound coincides with the best known bound for infeasible interior-point algorithms. It is conjectured that further investigation will improve the above bound to $O(\sqrt{n})$. … Read more
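
Turning such a statement into a total iteration bound is elementary (illustrative notation: $V_{0}$ denotes the larger of the initial duality gap and residual norms, $\kappa$ the hidden constant): if every group of at most $\kappa n$ inner iterations shrinks this quantity by the factor $\frac{1}{e}$, then after $k$ groups it is at most $V_{0}e^{-k}$, and
\[
V_{0}e^{-k}\le\varepsilon \quad\Longleftarrow\quad k\ge\log\frac{V_{0}}{\varepsilon},
\]
so $O\!\left(n\log\frac{V_{0}}{\varepsilon}\right)$ inner iterations in total suffice to reach accuracy $\varepsilon$.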

A New Primal-Dual Interior-Point Algorithm for Second-Order Cone Optimization

We present a primal-dual interior-point algorithm for second-order conic optimization problems based on a specific class of kernel functions. This class has been investigated earlier for the case of linear optimization problems. In this paper we derive the complexity bounds $O(\sqrt{N}(\log N)\log\frac{N}{\epsilon})$ for large-update and $O(\sqrt{N}\log\frac{N}{\epsilon})$ for small-update methods, respectively. Here $N$ denotes the … Read more
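
For readers new to this framework, a kernel function is a univariate function $\psi$ that induces a barrier by summation over a scaled iterate; the classical logarithmic barrier corresponds to the kernel below (a standard example given for orientation, not the specific class analyzed in the paper):
\[
\psi(t)=\frac{t^{2}-1}{2}-\log t,\qquad
\Psi(v)=\sum_{i}\psi(v_{i}),\qquad
v_{i}=\sqrt{\frac{x_{i}s_{i}}{\mu}}\ \text{(LO case)},
\]
while for second-order cone optimization the same construction is applied to the eigenvalues supplied by the underlying Jordan algebra.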

Primal-Dual Interior-Point Algorithms for Semidefinite Optimization Based on a Simple Kernel Function

Interior-point methods (IPMs) for semidefinite optimization (SDO) have been studied intensively, due to their polynomial complexity and practical efficiency. Recently, J. Peng et al. introduced so-called self-regular kernel (and barrier) functions and designed primal-dual interior-point algorithms based on self-regular proximity for linear optimization (LO) problems. They have also extended the approach for LO to SDO. In … Read more

A Comparative Study of New Barrier Functions for Primal-Dual Interior-Point Algorithms in Linear Optimization

Recently, so-called self-regular barrier functions for primal-dual interior-point methods (IPMs) for linear optimization were introduced. Each such barrier function is determined by its (univariate) self-regular kernel function. We introduce a new class of kernel functions. The class is defined by some simple conditions on the kernel function and its derivatives. These properties enable us to … Read more
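
Roughly, in the feasible LO setting a kernel function enters through the right-hand side of the Newton system: with $v=\sqrt{xs/\mu}$ taken componentwise and barrier $\Psi(v)=\sum_{i}\psi(v_{i})$, the search direction is defined by (a standard sketch of the framework, not a claim about the paper's exact presentation)
\[
A\Delta x = 0,\qquad A^{T}\Delta y+\Delta s = 0,\qquad s\,\Delta x+x\,\Delta s = -\mu\,v\,\nabla\Psi(v),
\]
and the classical primal-dual direction is recovered for the logarithmic-barrier kernel $\psi(t)=\frac{t^{2}-1}{2}-\log t$, since then $-\mu\,v\,\nabla\Psi(v)=\mu e-xs$.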

Limiting behavior of the central path in semidefinite optimization

It was recently shown that, unlike in linear optimization, the central path in semidefinite optimization (SDO) does not converge to the analytic center of the optimal set in general. In this paper we analyze the limiting behavior of the central path to explain this unexpected phenomenon. This is done by deriving a new necessary and … Read more
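
For reference, the central path of an SDO problem in standard form (standard notation, not specific to this paper) consists of the solutions $(X(\mu),y(\mu),S(\mu))$, $\mu>0$, of
\[
\mathrm{tr}(A_{i}X)=b_{i}\ (i=1,\dots,m),\quad X\succ 0,\qquad
\sum_{i=1}^{m}y_{i}A_{i}+S=C,\quad S\succ 0,\qquad
XS=\mu I,
\]
and the question studied here is which point of the optimal set this path approaches as $\mu\downarrow 0$.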

On the convergence of the central path in semidefinite optimization

The central path in linear optimization always converges to the analytic center of the optimal set. This result was extended to semidefinite programming by Goldfarb and Scheinberg (SIAM J. Optim. 8: 871-886, 1998). In this paper we show that this latter result is not correct in the absence of strict complementarity. We provide a counterexample, … Read more
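
For contrast, the LO statement in the first sentence can be made explicit (standard notation; $B$ is the support of the primal optimal partition): the limit of the primal central path is the analytic center of the primal optimal face,
\[
x^{*}=\arg\max\Big\{\sum_{i\in B}\log x_{i}\;:\;x\ \text{optimal}\Big\},
\]
with the analogous statement on the dual side; the counterexample indicates that the natural SDO analogue of this property can fail when strict complementarity is absent.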