Barrier Methods Based on Jordan-Hilbert Algebras for Stochastic Optimization in Spin Factors

We present decomposition logarithmic-barrier interior-point methods based on unital Jordan-Hilbert algebras for infinite-dimensional stochastic second-order cone programming problems in spin factors. The results show that the iteration complexity of the proposed algorithms is independent of the choice of Hilbert spaces from which the underlying spin factors are formed, and so it coincides with the best … Read more
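
For readers unfamiliar with spin factors, the following standard construction (background, not quoted from the paper) shows where such barriers live: a spin factor is the Jordan algebra on R ⊕ H built over a real Hilbert space H, its cone of squares is the second-order cone, and the usual logarithmic barrier is formed from the algebra's determinant.

    \[
      (s,x)\circ(t,y) \;=\; \bigl(st + \langle x,y\rangle_{\mathcal H},\; s\,y + t\,x\bigr), \qquad
      \mathcal{K} \;=\; \{(t,x)\in\mathbb{R}\oplus\mathcal{H} : t \ge \|x\|_{\mathcal H}\},
    \]
    \[
      \det(t,x) \;=\; t^{2} - \|x\|_{\mathcal H}^{2}, \qquad
      F(t,x) \;=\; -\ln\det(t,x) \quad \text{on } \operatorname{int}\mathcal{K}.
    \]

This F is a self-concordant barrier with parameter 2 for every choice of H, which is consistent with iteration bounds that do not depend on the underlying Hilbert space.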

A Homogeneous Predictor-Corrector Algorithm for Stochastic Nonsymmetric Convex Conic Optimization With Discrete Support

We consider a stochastic convex optimization problem over nonsymmetric cones with discrete support. This class of optimization problems has not been studied yet. By using a logarithmically homogeneous self-concordant barrier function, we present a homogeneous predictor-corrector interior-point algorithm for solving stochastic nonsymmetric conic optimization problems. We also derive an iteration bound for the proposed algorithm. … Read more
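
For background (standard definitions, not specific to this paper), a barrier F on the interior of a cone K is ν-logarithmically homogeneous and self-concordant if

    \[
      F(\tau x) \;=\; F(x) - \nu \ln \tau \qquad \text{for all } x \in \operatorname{int}\mathcal{K},\ \tau > 0,
    \]
    \[
      \bigl|D^{3}F(x)[h,h,h]\bigr| \;\le\; 2\,\bigl(D^{2}F(x)[h,h]\bigr)^{3/2}.
    \]

The parameter ν typically enters interior-point iteration bounds through a factor of \sqrt{\nu}\,\log(1/\varepsilon).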

RSOME in Python: An Open-Source Package for Robust Stochastic Optimization Made Easy

We develop a Python package called RSOME for modeling a wide spectrum of robust and distributionally robust optimization problems. RSOME serves as a modeling platform for formulating various optimization problems subject to distributional ambiguity in a highly readable and mathematically intuitive manner. Compared with the MATLAB version, RSOME in Python is more versatile and well … Read more
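
To give a flavor of the modeling syntax, here is a minimal worst-case linear program written against RSOME's ro interface; the numbers are made up for illustration, and grb_solver assumes a local Gurobi installation (other supported solvers can be substituted).

    from rsome import ro
    from rsome import grb_solver as grb   # assumes Gurobi is available

    model = ro.Model('robust_lp')
    x = model.dvar(2)                      # decision variables
    z = model.rvar(2)                      # uncertain parameters
    zset = (z >= -1, z <= 1)               # box uncertainty set

    model.minmax((1 + 0.15 * z) @ x, zset) # minimize the worst-case cost
    model.st(x[0] + x[1] >= 10, x >= 0)
    model.solve(grb)

    print(model.get())                     # worst-case optimal objective
    print(x.get())                         # optimal decision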

Optimal Eco-Routing for Hybrid Vehicles with Mechanistic/Data-Driven Powertrain Model Embedded

Hybrid Electric Vehicles (HEVs) are regarded as an important (transition) element of sustainable transportation. Exploiting the full potential of HEVs requires (i) a suitable route selection and (ii) suitable power management, i.e., deciding on the split between combustion engine and electric motor usage as well as the mode of the electric motor, i.e., driving or … Read more

LSOS: Line-search Second-Order Stochastic optimization methods for nonconvex finite sums

We develop a line-search second-order algorithmic framework for minimizing finite sums. We do not make any convexity assumptions, but require the terms of the sum to be continuously differentiable and have Lipschitz-continuous gradients. The methods fitting into this framework combine line searches and suitably decaying step lengths. A key issue is a two-step sampling at … Read more
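
The abstract only outlines the framework, but the basic shape of a subsampled second-order step with an Armijo line search can be sketched as follows. This is a generic illustration under simplifying assumptions (toy objective, fixed sample size, Hessian damping, steepest-descent fallback), not the authors' LSOS method.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 5))       # toy data: f(x) = (1/n) * sum_i (1 - cos(a_i' x))
    phi, dphi, d2phi = (lambda t: 1.0 - np.cos(t)), np.sin, np.cos

    def sampled_fgH(x, idx):
        """Objective, gradient, and Hessian of the subsampled sum over indices idx."""
        Ai = A[idx]
        t = Ai @ x
        f = np.mean(phi(t))
        g = Ai.T @ dphi(t) / len(idx)
        H = (Ai * d2phi(t)[:, None]).T @ Ai / len(idx)
        return f, g, H

    x = 0.1 * np.ones(5)
    for k in range(50):
        idx = rng.choice(len(A), size=32, replace=False)   # sample a subset of terms
        f, g, H = sampled_fgH(x, idx)
        d = -np.linalg.solve(H + 1e-3 * np.eye(5), g)      # damped (approximate) Newton direction
        if g @ d > -1e-12:                                 # nonconvexity guard: fall back to -g
            d = -g
        alpha = 1.0
        while sampled_fgH(x + alpha * d, idx)[0] > f + 1e-4 * alpha * (g @ d):
            alpha *= 0.5                                   # Armijo backtracking on the sampled objective
        x = x + alpha * d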

Inductive Linearization for Binary Quadratic Programs with Linear Constraints: A Computational Study

The computational performance of inductive linearizations for binary quadratic programs in combination with a mixed-integer programming solver is investigated for several combinatorial optimization problems and established benchmark instances. Apparently, a few of these are solved to optimality for the first time. Citation: preprint (no internal series / number), University of Bonn, Germany, June 11, 2021.
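
For context, the classical linearization that such approaches build on replaces each binary product x_i x_j by an auxiliary variable y_{ij} constrained as below; the inductive scheme studied in the paper derives its linearizing constraints differently, so this is only the textbook baseline.

    \[
      y_{ij} \le x_i, \qquad y_{ij} \le x_j, \qquad y_{ij} \ge x_i + x_j - 1, \qquad y_{ij} \ge 0,
    \]

which enforces y_{ij} = x_i x_j whenever x_i, x_j \in \{0,1\}.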

Distributionally Robust Optimization with Markovian Data

We study a stochastic program where the probability distribution of the uncertain problem parameters is unknown and only indirectly observed via finitely many correlated samples generated by an unknown Markov chain with d states. We propose a data-driven distributionally robust optimization model to estimate the problem’s objective function and optimal solution. By leveraging results from … Read more
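
In generic form (standard notation, not the paper's specific model), a data-driven distributionally robust program hedges against every distribution in an ambiguity set built from the data:

    \[
      \min_{x \in \mathcal{X}} \; \sup_{Q \in \mathcal{P}} \; \mathbb{E}_{\xi \sim Q}\bigl[\ell(x,\xi)\bigr],
    \]

where, in the setting above, the ambiguity set \mathcal{P} would be constructed from the empirical statistics of the correlated samples produced by the d-state Markov chain.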

MatQapNB User Guide: A branch-and-bound program for QAPs in Matlab with the Newton-Bracketing method

MatQapNB is a MATLAB toolbox that implements a parallel branch-and-bound method using NewtBracket (the Newton bracketing method [4]) for its lower-bounding procedure. It can solve small- to medium-scale Quadratic Assignment Problem (QAP) instances with dimension up to 30. MatQapNB was used in the numerical experiments on QAPs in the recent article “Solving challenging … Read more
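
For reference, the Koopmans-Beckmann form of the QAP that such branch-and-bound codes target is (standard formulation, not quoted from the user guide):

    \[
      \min_{\pi \in S_n} \; \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}\, b_{\pi(i)\pi(j)},
    \]

where A = (a_{ij}) and B = (b_{ij}) are given n-by-n data matrices, S_n is the set of permutations of {1, …, n}, and "dimension" refers to n.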

New complexity results and algorithms for min-max-min robust combinatorial optimization

In this work we investigate the min-max-min robust optimization problem applied to combinatorial problems with uncertain cost vectors that are contained in a convex uncertainty set. The idea of the approach is to calculate a set of k feasible solutions that are worst-case optimal if in each possible scenario the best of the k solutions would … Read more
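
Written out in the notation of the abstract (X the feasible set of the combinatorial problem, U the convex uncertainty set of cost vectors), the min-max-min problem reads

    \[
      \min_{x^{(1)},\dots,x^{(k)} \in X} \;\; \max_{c \in U} \;\; \min_{i=1,\dots,k} \; c^{\top} x^{(i)},
    \]

i.e., one commits to k candidate solutions up front and, once the cost scenario c is revealed, implements the best of them.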

Practical Large-Scale Linear Programming using Primal-Dual Hybrid Gradient

We present PDLP, a practical first-order method for linear programming (LP) that can solve to the high levels of accuracy that are expected in traditional LP applications. In addition, it can scale to very large problems because its core operations are matrix-vector multiplications. PDLP is derived by applying the primal-dual hybrid gradient (PDHG) method, popularized … Read more
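
The core PDHG iteration for an LP in standard form, min c^T x subject to Ax = b and x >= 0, fits in a few lines of NumPy. The sketch below is the textbook update only, without the restarts, presolve, preconditioning, and step-size heuristics that PDLP adds on top.

    import numpy as np

    def pdhg_lp(c, A, b, iters=20000):
        """Textbook primal-dual hybrid gradient for min c'x s.t. Ax = b, x >= 0."""
        m, n = A.shape
        x, y = np.zeros(n), np.zeros(m)
        tau = sigma = 0.9 / np.linalg.norm(A, 2)   # step sizes with tau * sigma * ||A||^2 < 1
        for _ in range(iters):
            x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)   # projected primal gradient step
            y = y - sigma * (A @ (2 * x_new - x) - b)          # dual step with extrapolated primal
            x = x_new
        return x, y

    # Tiny example: min x1 + 2*x2  s.t.  x1 + x2 = 1,  x >= 0  (optimal solution is (1, 0))
    A = np.array([[1.0, 1.0]])
    b = np.array([1.0])
    c = np.array([1.0, 2.0])
    x, y = pdhg_lp(c, A, b)
    print(np.round(x, 3), np.round(y, 3))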