On a Frank-Wolfe Approach for Abs-smooth Functions

We propose an algorithm which appears to be the first bridge between the fields of conditional gradient methods and abs-smooth optimization. Our problem setting is motivated by various applications that lead to nonsmoothness, such as $\ell_1$ regularization, phase retrieval problems, or ReLU activation in machine learning. To handle the nonsmoothness in our problem, we propose … Read more
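This is not the paper's abs-smooth algorithm, but a minimal sketch of the classical Frank-Wolfe (conditional gradient) template it builds on, here applied to an $\ell_1$-ball constraint to echo the $\ell_1$-regularization motivation; the function names and step-size rule are illustrative assumptions, not the paper's method.

```python
# Minimal vanilla Frank-Wolfe sketch (NOT the paper's abs-smooth variant):
# minimize a smooth f over the l1 ball {x : ||x||_1 <= tau}, whose linear
# minimization oracle (LMO) returns a signed, scaled unit vector.
import numpy as np

def lmo_l1_ball(grad, tau):
    """LMO for the l1 ball: argmin_{||v||_1 <= tau} <grad, v>."""
    v = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    v[i] = -tau * np.sign(grad[i])
    return v

def frank_wolfe(grad_f, x0, tau, iters=200):
    x = x0.copy()
    for t in range(iters):
        g = grad_f(x)
        v = lmo_l1_ball(g, tau)           # extreme point minimizing <g, .>
        gamma = 2.0 / (t + 2.0)           # standard open-loop step size
        x = (1 - gamma) * x + gamma * v   # convex combination stays feasible
    return x

# Toy usage: least squares over an l1 ball (a LASSO-type constrained problem).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
x_hat = frank_wolfe(lambda x: A.T @ (A @ x - b), np.zeros(10), tau=1.0)
```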

An Algorithm-Independent Measure of Progress for Linear Constraint Propagation

Propagation of linear constraints has become a crucial sub-routine in modern Mixed-Integer Programming (MIP) solvers. In practice, iterative algorithms with tolerance-based stopping criteria are used to avoid problems with slow or infinite convergence. However, these heuristic stopping criteria can pose difficulties for fairly comparing the efficiency of different implementations of iterative propagation algorithms in a … Read more
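As a point of reference for the sub-routine being measured, here is a minimal sketch of activity-based bound tightening with a tolerance-based stopping rule of the kind the abstract refers to; it is a generic illustration with made-up names, not any particular solver's implementation.

```python
# Iterative bound propagation for constraints a.x <= b over box domains
# [lb, ub]: each round tightens bounds using minimal-activity arguments and
# stops once no bound changes by more than the tolerance.
import numpy as np

def propagate(A, b, lb, ub, tol=1e-6, max_rounds=100):
    """Repeatedly tighten [lb, ub]; A is dense (rows = constraints)."""
    lb, ub = lb.copy(), ub.copy()
    for _ in range(max_rounds):
        changed = False
        for a, rhs in zip(A, b):
            # minimal activity of a.x over the current box
            min_act = np.sum(np.where(a > 0, a * lb, a * ub))
            for j, aj in enumerate(a):
                if aj > 0:
                    new_ub = (rhs - (min_act - aj * lb[j])) / aj
                    if new_ub < ub[j] - tol:
                        ub[j], changed = new_ub, True
                elif aj < 0:
                    new_lb = (rhs - (min_act - aj * ub[j])) / aj
                    if new_lb > lb[j] + tol:
                        lb[j], changed = new_lb, True
        if not changed:   # tolerance-based stopping criterion
            break
    return lb, ub

# Toy usage: x1 + x2 <= 1 tightens both upper bounds from 10 to 1.
print(propagate(np.array([[1.0, 1.0]]), np.array([1.0]),
                np.zeros(2), 10 * np.ones(2)))
```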

FrankWolfe.jl: A High-Performance and Flexible Toolbox for Frank-Wolfe Algorithms and Conditional Gradients

We present FrankWolfe.jl, an open-source implementation of several popular Frank-Wolfe and Conditional Gradients variants for first-order constrained optimization. The package is designed with flexibility and high performance in mind, allowing for easy extension and relying on few assumptions regarding the user-provided functions. It supports Julia’s unique multiple dispatch feature and interfaces smoothly with generic linear optimization … Read more

Accelerating Domain Propagation: an Efficient GPU-Parallel Algorithm over Sparse Matrices

Fast domain propagation of linear constraints has become a crucial component of today’s best algorithms and solvers for mixed-integer programming and pseudo-Boolean optimization to achieve peak solving performance. Irregularities in the form of dynamic algorithmic behaviour, dependency structures, and sparsity patterns in the input data make efficient implementations of domain propagation on GPUs and, … Read more
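A rough CPU-side sketch of the data layout this line of work starts from: constraints in CSR (compressed sparse row) form, with per-constraint minimal activities computed independently, which is the part that maps naturally onto one GPU work-group per row. The function below only illustrates that structure; it is not the paper's GPU kernels or bound-update logic.

```python
# Per-row minimal activities from CSR storage; the row loop is the
# embarrassingly parallel part that a GPU would distribute across work-groups.
import numpy as np
from scipy.sparse import csr_matrix

def min_activities(A_csr: csr_matrix, lb: np.ndarray, ub: np.ndarray) -> np.ndarray:
    """Minimal activity of each constraint row over the box [lb, ub]."""
    acts = np.empty(A_csr.shape[0])
    for i in range(A_csr.shape[0]):           # on a GPU: one work-group per row
        start, end = A_csr.indptr[i], A_csr.indptr[i + 1]
        cols, vals = A_csr.indices[start:end], A_csr.data[start:end]
        acts[i] = np.sum(np.where(vals > 0, vals * lb[cols], vals * ub[cols]))
    return acts

# Tiny example: two sparse constraints over three variables in [0, 1].
A = csr_matrix(np.array([[1.0, -2.0, 0.0], [0.0, 3.0, 1.0]]))
print(min_activities(A, lb=np.zeros(3), ub=np.ones(3)))
```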

An Online-Learning Approach to Inverse Optimization

In this paper, we demonstrate how to learn the objective function of a decision-maker while only observing the problem input data and the decision-maker’s corresponding decisions over multiple rounds. Our approach is based on online learning and works for linear objectives over arbitrary feasible sets for which we have a linear optimization oracle. As such, … Read more
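A minimal sketch of the online-learning idea described here, under illustrative assumptions: keep a running cost estimate, compare the observed decision with the decision a linear optimization oracle returns under that estimate, and update by a projected online gradient step. The simplex projection and the fixed step size are assumptions made for the sketch, not necessarily the paper's choices.

```python
# Online learning of a linear objective from observed decisions.
import numpy as np

def project_simplex(c):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(c)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(c) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1.0)
    return np.maximum(c - theta, 0.0)

def learn_objective(rounds, oracle, dim, eta=0.1):
    """rounds: iterable of (problem_data, observed_decision);
    oracle(c, data) returns argmin over the feasible set of c.y."""
    c = np.ones(dim) / dim
    for data, x_obs in rounds:
        y = oracle(c, data)                          # decision under current estimate
        c = project_simplex(c - eta * (x_obs - y))   # projected online gradient step
    return c
```

Any online learner over the cost estimates (e.g., multiplicative weights instead of projected gradient) fits the same template; only the oracle call is problem-specific.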

Aggregation-based cutting-planes for packing and covering integer programs

In this paper, we study the strength of Chvátal-Gomory (CG) cuts and more generally aggregation cuts for packing and covering integer programs (IPs). Aggregation cuts are obtained as follows: Given an IP formulation, we first generate a single implied inequality using aggregation of the original constraints, then obtain the integer hull of the set defined … Read more
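For intuition, a small textbook-style illustration (not an example from the paper): aggregating three packing constraints with multipliers $\lambda=(\tfrac12,\tfrac12,\tfrac12)$ gives a single implied inequality, and rounding down its right-hand side yields a CG cut that cuts off the fractional point $(\tfrac12,\tfrac12,\tfrac12)$:

\[
\left.\begin{aligned}
x_1 + x_2 &\le 1\\
x_2 + x_3 &\le 1\\
x_1 + x_3 &\le 1
\end{aligned}\right\}
\;\xrightarrow{\;\lambda=(\frac12,\frac12,\frac12)\;}\;
x_1 + x_2 + x_3 \le \tfrac32
\;\xrightarrow{\;\text{round down}\;}\;
x_1 + x_2 + x_3 \le 1 .
\]

Aggregation cuts as described in the abstract go further, using the full integer hull of the set defined by the aggregated inequality rather than a single rounding step.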

Detecting Almost Symmetries of Graphs

We present a branch-and-bound framework to solve the following problem: Given a graph G and an integer k, find a subgraph of G, formed by removing no more than k edges, that contains the most symmetry. We call symmetries on such a subgraph “almost symmetries” of G. We implement our branch-and-bound framework in PEBBL to … Read more
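For a concrete sense of the objective, here is a brute-force baseline, purely for intuition; the paper's contribution is the branch-and-bound framework in PEBBL, which is not reproduced here. The networkx-based automorphism counting and all names are choices made for this sketch.

```python
# Brute force: try every set of at most k edge removals and count the
# automorphisms of the remaining graph (a proxy for "most symmetry").
from itertools import combinations
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def best_almost_symmetries(G: nx.Graph, k: int):
    best_count, best_removed = 0, ()
    for r in range(k + 1):
        for removed in combinations(G.edges(), r):
            H = G.copy()
            H.remove_edges_from(removed)
            count = sum(1 for _ in GraphMatcher(H, H).isomorphisms_iter())
            if count > best_count:
                best_count, best_removed = count, removed
    return best_count, best_removed

# Tiny example: a path on 4 nodes plus one chord.
G = nx.path_graph(4)
G.add_edge(0, 2)
print(best_almost_symmetries(G, k=1))
```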

Solving MIPs via Scaling-based Augmentation

Augmentation methods for mixed-integer (linear) programs are a class of primal solution approaches in which a current iterate is augmented to a better solution or proved optimal. It is well known that the performance of these methods, i.e., the number of iterations needed, can theoretically be improved by scaling methods. We extend these results by an … Read more
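A bare-bones sketch of the generic geometric-scaling augmentation loop that results of this kind refer to, under stated assumptions: integer objective data, and a hypothetical oracle `improve(x, mu)` that returns a feasible point whose objective is better by at least `mu`, or `None` if no such point exists. This is a framework illustration, not the paper's specific scaling scheme.

```python
# Geometric-scaling augmentation: insist on improvements of at least mu,
# and halve mu whenever the improvement oracle fails.
def scaling_augmentation(x, improve, mu0):
    mu = mu0
    while mu >= 1:              # assumes integer objective data
        y = improve(x, mu)      # hypothetical oracle: mu-improving point or None
        if y is None:
            mu //= 2            # no mu-improving augmentation: refine the scale
        else:
            x = y               # augment the incumbent solution
    return x

# Toy usage: minimize x over {0, ..., 10}, starting from 10.
feasible = set(range(11))
oracle = lambda x, mu: (x - mu) if (x - mu) in feasible else None
print(scaling_augmentation(10, improve=oracle, mu0=8))  # -> 0
```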

Polyhedral Approximation of Ellipsoidal Uncertainty Sets via Extended Formulations – A Computational Case Study

Robust optimization is an important technique to immunize optimization problems against data uncertainty. In the case of a linear program and an ellipsoidal uncertainty set, the robust counterpart turns into a second-order cone program. In this work, we investigate the efficiency of linearizing the second-order cone constraints of the latter. This is done using the … Read more
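For completeness, the standard derivation behind the statement that the robust counterpart becomes a second-order cone program (textbook material, not specific to this paper): a linear constraint whose coefficient vector ranges over an ellipsoid $\mathcal{U}=\{\bar a + Pu : \|u\|_2 \le 1\}$ is equivalent to a single second-order cone constraint,

\[
a^\top x \le b \quad \forall\, a \in \mathcal{U}
\;\Longleftrightarrow\;
\bar a^\top x + \max_{\|u\|_2 \le 1} (P^\top x)^\top u \le b
\;\Longleftrightarrow\;
\bar a^\top x + \|P^\top x\|_2 \le b .
\]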

A Note on the Extension Complexity of the Knapsack Polytope

We show that there are 0-1 and unbounded knapsack polytopes with super-polynomial extension complexity. More specifically, for each $n \in \mathbb{N}$ we exhibit 0-1 and unbounded knapsack polyhedra in dimension $n$ with extension complexity $\Omega(2^{\sqrt{n}})$.
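For reference, the standard notion used in this statement: the extension complexity of a polytope $P$ is the smallest number of facets (inequalities) of any polyhedron that projects onto $P$,

\[
\operatorname{xc}(P) \;=\; \min\bigl\{\, \#\text{facets}(Q) \;:\; Q \text{ a polyhedron with } \pi(Q) = P \text{ for some linear map } \pi \,\bigr\},
\]

so a lower bound of $\Omega(2^{\sqrt{n}})$ rules out polynomial-size extended (LP) formulations of these knapsack polytopes.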