Final Exam Scheduling at Bucknell University: A Case Study and Open-Source Tool

Problem Definition: Final exam scheduling is a common but challenging optimization problem. At Bucknell University, a small liberal arts institution, the problem is particularly complex and has historically required the Registrar’s Office to spend months manually designing an exam schedule each semester. Methodology: We worked in close collaboration with the Registrar’s Office. First, we created …
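
As a sketch of the core idea (not the model from the paper), the constraint that exams sharing students must land in different time slots can be written as a small integer program. The exam names, conflicts, slot count, and objective below are all made up for illustration:

```python
# Minimal sketch of conflict-free exam scheduling as an integer program.
# All data here is hypothetical, not Bucknell's.
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

exams = ["MATH201", "CSCI203", "ECON101", "PHYS211"]
conflicts = [("MATH201", "CSCI203"), ("CSCI203", "PHYS211")]  # shared students
slots = range(3)

model = LpProblem("exam_scheduling", LpMinimize)
x = {(e, s): LpVariable(f"x_{e}_{s}", cat=LpBinary)
     for e in exams for s in slots}

# Illustrative objective: prefer earlier slots.
model += lpSum(s * x[e, s] for e in exams for s in slots)

# Each exam gets exactly one time slot.
for e in exams:
    model += lpSum(x[e, s] for s in slots) == 1

# Exams with students in common cannot share a slot.
for e1, e2 in conflicts:
    for s in slots:
        model += x[e1, s] + x[e2, s] <= 1

model.solve()
for e in exams:
    print(e, "-> slot", next(s for s in slots if x[e, s].value() == 1))
```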

What is the Best Way to Do Something? A Discreet Tour of Discrete Optimization

In mathematical optimization, we want to find the best possible solution for a decision-making problem. Curiously, these problems are harder to solve if they have discrete decisions. Imagine that you would like to buy chocolate: you can buy no chocolate or one chocolate bar, but typically you cannot buy just half of a bar. Now …
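
To make the difference concrete, here is a toy sketch of the chocolate example; the prices and budget are made up, but they show how the continuous optimum can be unattainable once decisions must be whole numbers:

```python
# Toy illustration of why discreteness matters: bars cost $3, we have $7,
# and each bar gives 5 units of joy. All numbers are hypothetical.
budget, price, joy = 7, 3, 5

# If fractional bars were allowed, the best "solution" buys budget / price bars.
fractional_bars = budget / price                  # 2.33... bars
print("fractional joy:", fractional_bars * joy)   # 11.67

# With whole bars only, we must round down, and some budget goes unused.
integer_bars = budget // price                    # 2 bars
print("integer joy:", integer_bars * joy)         # 10
```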

Optimization over Trained (and Sparse) Neural Networks: A Surrogate within a Surrogate

We can approximate a constraint or an objective function that is uncertain or nonlinear with a neural network that we embed in the optimization model. This approach, which is known as constraint learning, faces the challenge that optimization models with neural network surrogates are harder to solve. Such difficulties have motivated studies on model reformulation, …
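
For intuition, here is a minimal sketch (not from the paper) of the standard big-M formulation for embedding a single trained ReLU neuron, y = max(0, w·x + b), into a mixed-integer program; the weights, input bounds, and big-M constant are hypothetical:

```python
# Big-M MILP encoding of one ReLU neuron. Parameters are made up; real
# constraint-learning pipelines derive M from bounds of the trained network.
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

w, b, M = [2.0, -1.0], 0.5, 10.0

model = LpProblem("relu_surrogate", LpMinimize)
x = [LpVariable(f"x{i}", lowBound=-1, upBound=1) for i in range(2)]
y = LpVariable("y", lowBound=0)          # post-activation output
z = LpVariable("z", cat=LpBinary)        # 1 if the neuron is active

pre = lpSum(wi * xi for wi, xi in zip(w, x)) + b
model += y >= pre                        # y at least the pre-activation
model += y <= pre + M * (1 - z)          # tight when active (z = 1)
model += y <= M * z                      # forces y = 0 when inactive

model += y                               # e.g., minimize the surrogate output
model.solve()
print("y* =", y.value(), "at x* =", [xi.value() for xi in x])
```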

When Deep Learning Meets Polyhedral Theory: A Survey

In the past decade, deep learning has become the prevalent methodology for predictive modeling thanks to the remarkable accuracy of deep neural networks in tasks such as computer vision and natural language processing. Meanwhile, the structure of neural networks converged back to simpler representations based on piecewise constant and piecewise linear functions such as the Rectified …
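
As a small illustration of this piecewise-linear view, the following sketch (with random stand-in weights, not a trained model) shows that once the ReLU activation pattern at a point is fixed, the network collapses to a single affine map on the surrounding region:

```python
# Within a fixed ReLU activation pattern, the network is an affine map A x + c.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)

def forward(x):
    h = np.maximum(W1 @ x + b1, 0.0)
    return W2 @ h + b2

x = np.array([0.3, -0.7])
pattern = (W1 @ x + b1 > 0).astype(float)   # which ReLUs are active at x

# On the region sharing this pattern, ReLU acts as a 0/1 diagonal matrix D,
# so the network reduces to (W2 D W1) x + W2 D b1 + b2.
D = np.diag(pattern)
A = W2 @ D @ W1
c = W2 @ (D @ b1) + b2
assert np.allclose(forward(x), A @ x + c)
print("local affine map:", A, c)
```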

The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks

Neural networks tend to achieve better accuracy with training if they are larger, even if the resulting models are overparameterized. Nevertheless, carefully removing such excess parameters before, during, or after training may also produce models with similar or even improved accuracy. In many cases, that can curiously be achieved by heuristics as simple as …
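
One widely used heuristic of this kind is global magnitude pruning, which simply zeroes out the smallest-magnitude weights. The sketch below (with made-up weights) shows that baseline idea, not the combinatorial method proposed in the paper:

```python
# Global magnitude pruning: zero out the smallest-magnitude weights across
# all layers. Weights here are random stand-ins.
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of all weights."""
    flat = np.concatenate([w.ravel() for w in weights])
    threshold = np.quantile(np.abs(flat), sparsity)
    return [np.where(np.abs(w) >= threshold, w, 0.0) for w in weights]

rng = np.random.default_rng(0)
layers = [rng.normal(size=(4, 4)), rng.normal(size=(4, 2))]
pruned = magnitude_prune(layers, sparsity=0.5)
print("kept:", sum(int((w != 0).sum()) for w in pruned),
      "of", sum(w.size for w in layers), "weights")
```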

Scaling Up Exact Neural Network Compression by ReLU Stability

We can compress a neural network while exactly preserving its underlying functionality with respect to a given input domain if some of its neurons are stable. However, current approaches to determine the stability of neurons in networks with Rectified Linear Unit (ReLU) activations require solving or finding a good approximation to multiple discrete optimization problems. …
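
For intuition, one inexpensive way to bound neuron stability (weaker than the exact discrete formulations mentioned above) is interval bound propagation over a box-shaped input domain, sketched below with hypothetical weights; neurons whose pre-activation interval stays on one side of zero are stable:

```python
# Interval bound propagation: neurons with lower bound > 0 are stably active;
# neurons with upper bound < 0 are stably inactive. Weights are made up.
import numpy as np

def interval_bounds(W, b, lo, hi):
    """Propagate elementwise input bounds [lo, hi] through W x + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

rng = np.random.default_rng(0)
W, b = rng.normal(size=(5, 3)), rng.normal(size=5)
lo, hi = -np.ones(3), np.ones(3)

lb, ub = interval_bounds(W, b, lo, hi)
print("stably active:  ", np.flatnonzero(lb > 0))
print("stably inactive:", np.flatnonzero(ub < 0))
```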

Lossless Compression of Deep Neural Networks

Deep neural networks have been successful in many predictive modeling tasks, such as image and language recognition, where large neural networks are often used to obtain good accuracy. Consequently, it is challenging to deploy these networks under limited computational resources, such as in mobile devices. In this work, we introduce an algorithm that removes units …
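
As a toy illustration of why stability enables lossless removal (with made-up weights, not the paper's algorithm), the sketch below deletes a stably inactive ReLU unit and checks that the network's outputs are unchanged on the input domain:

```python
# A stably inactive ReLU always outputs 0, so its row in the current layer
# and its column in the next layer can be deleted without changing the
# network's function on the domain of interest.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2 = rng.normal(size=(2, 4))
b1[2] = -100.0  # force unit 2 to be stably inactive on [-1, 1]^3 (demo only)

def forward(W1, b1, W2, x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0)

def drop_unit(W1, b1, W2, unit):
    """Remove a hidden unit that is provably inactive on the input domain."""
    keep = [i for i in range(W1.shape[0]) if i != unit]
    return W1[keep], b1[keep], W2[:, keep]

W1c, b1c, W2c = drop_unit(W1, b1, W2, unit=2)
for _ in range(5):
    x = rng.uniform(-1, 1, size=3)
    assert np.allclose(forward(W1, b1, W2, x), forward(W1c, b1c, W2c, x))
print("unit removed; outputs identical on the domain")
```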

Template-based Minor Embedding for Adiabatic Quantum Optimization

Quantum Annealing (QA) can be used to quickly obtain near-optimal solutions for Quadratic Unconstrained Binary Optimization (QUBO) problems. In QA hardware, each decision variable of a QUBO should be mapped to one or more adjacent qubits in such a way that pairs of variables defining a quadratic term in the objective function are mapped to …
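
To make the notion of an embedding concrete, the sketch below (with a made-up hardware graph and QUBO) checks the two defining conditions: each variable's chain of qubits is connected, and each quadratic term is backed by a hardware edge between the corresponding chains. It only verifies an embedding; it does not construct one, template-based or otherwise:

```python
# Checking a minor embedding: variable -> connected chain of qubits, with
# hardware adjacency for every quadratic term. All data is hypothetical.
import networkx as nx

hardware = nx.grid_2d_graph(3, 3)            # stand-in qubit graph
qubo_edges = [("a", "b"), ("b", "c")]        # quadratic terms
embedding = {"a": [(0, 0), (0, 1)],          # variable -> qubit chain
             "b": [(1, 0), (1, 1)],
             "c": [(2, 1)]}

def valid(hardware, qubo_edges, embedding):
    chains = list(embedding.values())
    # Chains must be pairwise disjoint and internally connected.
    if sum(map(len, chains)) != len({q for c in chains for q in c}):
        return False
    if not all(nx.is_connected(hardware.subgraph(c)) for c in chains):
        return False
    # Every quadratic term needs at least one hardware edge between chains.
    return all(any(hardware.has_edge(p, q)
                   for p in embedding[u] for q in embedding[v])
               for u, v in qubo_edges)

print(valid(hardware, qubo_edges, embedding))   # True for this toy case
```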

Empirical Bounds on Linear Regions of Deep Rectifier Networks

One way to characterize the expressiveness of a piecewise linear neural network is by the number of linear regions, or pieces, of the function it models. We have observed substantial progress on this topic through lower and upper bounds on the maximum number of linear regions and a counting procedure. However, these bounds only account for …
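
A simple empirical lower bound of this kind can be obtained by sampling inputs and counting distinct activation patterns, since each realized pattern witnesses a linear piece. The sketch below uses a random network as a stand-in for a trained one:

```python
# Empirical lower bound on the number of linear regions: count distinct
# ReLU activation patterns over sampled inputs. Weights are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)

patterns = set()
for x in rng.uniform(-1, 1, size=(20_000, 2)):
    h1 = np.maximum(W1 @ x + b1, 0.0)
    a1 = tuple(h1 > 0)                      # layer-1 activation pattern
    a2 = tuple((W2 @ h1 + b2) > 0)          # layer-2 activation pattern
    patterns.add(a1 + a2)

print("distinct patterns found (lower bound on regions):", len(patterns))
```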