Weighted LCPs and interior point systems for copositive linear transformations on Euclidean Jordan algebras

In the setting of a Euclidean Jordan algebra V with symmetric cone V_+, corresponding to a linear transformation M, a 'weight vector' w in V_+, and a q in V, we consider the weighted linear complementarity problem wLCP(M,w,q) and (when w is in the interior of V_+) the interior point system IPS(M,w,q). When M is …
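In the Jordan-algebraic setting, with \(\circ\) denoting the Jordan product of \(V\), the weighted problem is commonly stated as
\[
\mathrm{wLCP}(M,w,q):\quad \text{find } x \in V_+ \ \text{such that} \ s := Mx + q \in V_+ \ \text{and} \ x \circ s = w,
\]
and, for \(w\) in the interior of \(V_+\), \(\mathrm{IPS}(M,w,q)\) requires instead that \(x\) and \(s\) lie in the interior of \(V_+\); this is the standard formulation and may differ in detail from the one used in the paper, since the truncated abstract does not restate it.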

Quasi-Newton approaches to Interior Point Methods for quadratic problems

Interior Point Methods (IPMs) rely on Newton's method for solving systems of nonlinear equations. Solving the linear systems that arise from this approach is the most computationally expensive task of an interior point iteration. If, due to the problem's inner structure, there are special techniques for efficiently solving these linear systems, IPMs enjoy fast convergence and …
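For a convex quadratic program \(\min_x \tfrac12 x^T Q x + c^T x\) subject to \(Ax = b\), \(x \ge 0\), the Newton step of a primal-dual IPM is typically obtained from an augmented system of the form
\[
\begin{pmatrix} -(Q + X^{-1} S) & A^T \\ A & 0 \end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix}
=
\begin{pmatrix} r_d \\ r_p \end{pmatrix},
\qquad X = \operatorname{diag}(x),\ S = \operatorname{diag}(s),
\]
whose repeated factorization dominates the cost of each iteration; the quasi-Newton variants referred to above aim to reuse or approximate this factorization, although the exact scheme is not visible in the truncated abstract.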

Interior Point Methods and Preconditioning for PDE-Constrained Optimization Problems Involving Sparsity Terms

PDE-constrained optimization problems with control or state constraints are challenging from both an analytical and a numerical perspective. The combination of these constraints with a sparsity-promoting L1 term within the objective function requires sophisticated optimization methods. We propose the use of an Interior Point scheme applied to a smoothed reformulation of the discretized problem, and …
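A representative model problem of this type (the concrete PDE and constraints treated in the paper are not shown above) is
\[
\min_{y,u}\ \tfrac12 \|y - y_d\|_{L^2(\Omega)}^2 + \tfrac{\alpha}{2}\|u\|_{L^2(\Omega)}^2 + \beta \|u\|_{L^1(\Omega)}
\quad \text{subject to a PDE constraint } c(y,u) = 0 \ \text{and bounds } a \le u \le b,
\]
where the nonsmooth \(L^1\) term promotes sparsity of the control \(u\) and is the part that calls for a smoothed reformulation.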

An ADMM-Based Interior-Point Method for Large-Scale Linear Programming

In this paper, we propose a new framework for implementing the interior point method (IPM) to solve very large-scale linear programs (LPs). Traditional IPMs typically use Newton's method to approximately solve a subproblem that aims to minimize a log-barrier penalty function at each iteration. Due to its connection to Newton's method, IPM is …
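The subproblem mentioned here is the classical log-barrier problem
\[
\min_x\ c^T x - \mu \sum_{i=1}^{n} \ln x_i \quad \text{subject to } Ax = b,
\]
solved approximately for a decreasing sequence of barrier parameters \(\mu > 0\); the title suggests that the inner Newton solver is replaced by an ADMM-based one, though the precise construction is not visible in the truncated abstract.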

Stable interior point method for convex quadratic programming with strict error bounds

We present a short-step interior point method for solving a class of nonlinear programming problems with quadratic objective function. Convex quadratic programming problems can be reformulated as problems in this class. The method is shown to have weakly polynomial time complexity. A complete proof of the numerical stability of the method is provided. No …
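For reference, a convex quadratic program in standard form reads
\[
\min_x\ \tfrac12 x^T Q x + c^T x \quad \text{subject to } Ax = b,\ x \ge 0, \qquad Q \succeq 0;
\]
the abstract states that such problems can be reformulated into the class treated by the method, but the reformulation itself is not shown in the excerpt.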

Primal-Dual Interior-Point Methods for Domain-Driven Formulations: Algorithms

We study infeasible-start primal-dual interior-point methods for convex optimization problems given in a typically natural form that we denote as the Domain-Driven formulation. Our algorithms extend many advantages of primal-dual interior-point techniques available for conic formulations, such as the current best complexity bounds and more robust certificates of approximate optimality, unboundedness, and infeasibility, to Domain-Driven formulations. The …
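In the Domain-Driven setting the problem is usually posed (in the form used by Karimi and Tunçel) as
\[
\inf_x\ \langle c, x \rangle \quad \text{subject to } Ax \in D,
\]
where \(D\) is a closed convex set given by a self-concordant barrier; this restatement is supplied for context, since the truncated abstract does not repeat it.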

On self-concordant barriers for generalized power cones

In the study of interior-point methods for nonsymmetric conic optimization and their applications, Nesterov introduced the power cone, together with a 4-self-concordant barrier for it. In his PhD thesis, Chares found an improved 3-self-concordant barrier for the power cone. In addition, he introduced the generalized power cone, and conjectured a nearly optimal self-concordant barrier for …
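The cones in question are usually written as
\[
\mathcal{P}_\alpha = \{(x_1,x_2,z) \in \mathbb{R}_+^2 \times \mathbb{R} : x_1^{\alpha} x_2^{1-\alpha} \ge |z|\}, \qquad 0 < \alpha < 1,
\]
for which Chares' 3-self-concordant barrier is commonly quoted as
\[
F(x_1,x_2,z) = -\log\!\big(x_1^{2\alpha} x_2^{2(1-\alpha)} - z^2\big) - (1-\alpha)\log x_1 - \alpha \log x_2,
\]
and the generalized power cone as
\[
\mathcal{P}_\alpha^{n,m} = \Big\{(x,z) \in \mathbb{R}_+^n \times \mathbb{R}^m : \prod_{i=1}^n x_i^{\alpha_i} \ge \|z\|_2 \Big\}, \qquad \alpha_i > 0,\ \sum_{i=1}^n \alpha_i = 1;
\]
these are the commonly cited forms and should be checked against the paper's conventions.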

Long-Step Path-Following Algorithm for Solving Symmetric Programming Problems with Nonlinear Objective Functions

We describe a long-step path-following algorithm for a class of symmetric programming problems with nonlinear convex objective functions. Complexity estimates similar to those for the case of a linear-quadratic objective function are established. The results of numerical experiments for the class of optimization problems involving quantum entropy are presented. Citation: Preprint, University of Notre Dame, December …
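An illustrative instance of this problem class (the paper's exact setting is not visible in the excerpt) is symmetric-cone programming with a quantum-entropy objective, for example
\[
\min_X\ \langle C, X \rangle + \operatorname{tr}(X \log X) \quad \text{subject to } \mathcal{A}(X) = b,\ X \succeq 0,
\]
where \(\operatorname{tr}(X \log X)\), the negative von Neumann entropy, is convex on the positive semidefinite cone.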

Maintaining a Basis Matrix in the Linear Programming Interior Point Method

To precondition the normal equation system from the linear programming (LP) interior point method, basis preconditioners choose a basis matrix dependent on column scaling factors. Two criteria for choosing the basis matrix are compared, which yield a maximum-volume or a maximum-weight basis. Finding a maximum-volume basis requires a combinatorial effort, but it gives …
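The system being preconditioned is the normal-equations system of the IPM,
\[
A \Theta A^T \Delta y = r, \qquad \Theta = \operatorname{diag}(\theta_1,\dots,\theta_n), \ \theta_j > 0,
\]
and a basis preconditioner takes a nonsingular column basis \(B\) of \(A\), chosen with the help of the scaling factors \(\theta_j\), and uses \(B \Theta_B B^T\) as the preconditioner; the maximum-volume and maximum-weight criteria compared in the paper are two rules for selecting \(B\), as sketched here under standard assumptions.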

Worst-case convergence analysis of gradient and Newton methods through semidefinite programming performance estimation

We provide new tools for worst-case performance analysis of the gradient (or steepest descent) method of Cauchy for smooth strongly convex functions, and of Newton's method for self-concordant functions. The analysis uses semidefinite programming performance estimation, as pioneered by Drori and Teboulle [Mathematical Programming, 145(1-2):451–482, 2014], and extends recent performance estimation results for the method of …
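Schematically, a performance estimation problem for the gradient method asks for the worst possible final accuracy over an entire function class,
\[
\max_{f \in \mathcal{F},\, x_0}\ f(x_N) - f(x_*) \quad \text{subject to } x_{k+1} = x_k - \gamma_k \nabla f(x_k), \ k = 0,\dots,N-1, \ \|x_0 - x_*\| \le R,
\]
and becomes a tractable semidefinite program once \(f\) is replaced by interpolation conditions on the iterates; the paper's precise setup (for instance the treatment of exact line search and of Newton's method for self-concordant functions) is not visible in the truncated abstract.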