PyEPO: A PyTorch-based End-to-End Predict-then-Optimize Library for Linear and Integer Programming

In deterministic optimization, it is typically assumed that all parameters of the problem are fixed and known. In practice, however, some parameters may be a priori unknown but can be estimated from historical data. A typical predict-then-optimize approach separates prediction and optimization into two stages. Recently, end-to-end predict-then-optimize has become an attractive alternative. In this …
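The end-to-end idea can be made concrete with the SPO+ surrogate loss, whose subgradient with respect to the predicted costs is 2(w*(c) − w*(2ĉ − c)). The sketch below is not PyEPO's API: it implements SPO+ by hand for a toy top-k selection problem (pick the k cheapest items), where the solver oracle is simply torch.topk; PyEPO wraps general LP/MIP solvers behind the same pattern.

```python
import torch

def solve_topk(c, k):
    # toy "solver": argmin of c @ w over binary w selecting exactly k items
    w = torch.zeros_like(c)
    w[torch.topk(-c, k).indices] = 1.0
    return w

class SPOPlus(torch.autograd.Function):
    # SPO+ surrogate loss for a minimization problem min_w c @ w
    @staticmethod
    def forward(ctx, c_hat, c, k):
        w_true = solve_topk(c, k)             # optimal decision under true costs
        w_spo = solve_topk(2.0 * c_hat - c, k)
        ctx.save_for_backward(w_true, w_spo)
        return -(2.0 * c_hat - c) @ w_spo + 2.0 * c_hat @ w_true - c @ w_true

    @staticmethod
    def backward(ctx, grad_out):
        w_true, w_spo = ctx.saved_tensors
        # Danskin-style subgradient: 2 (w*(c) - w*(2c_hat - c))
        return grad_out * 2.0 * (w_true - w_spo), None, None

# end-to-end training: a linear model predicts costs, trained through SPO+
torch.manual_seed(0)
n_feat, n_items, k = 4, 10, 3
W_true = torch.randn(n_items, n_feat)         # hidden feature-to-cost map
model = torch.nn.Linear(n_feat, n_items)
opt = torch.optim.SGD(model.parameters(), lr=0.05)

for step in range(500):
    x = torch.randn(n_feat)
    c = W_true @ x                            # true costs (unobserved at test time)
    loss = SPOPlus.apply(model(x), c, k)
    opt.zero_grad(); loss.backward(); opt.step()
```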

Learning for Spatial Branching: An Algorithm Selection Approach

The use of machine learning techniques to improve the performance of branch-and-bound optimization algorithms is a very active area in the context of mixed-integer linear problems, but little has been done for nonlinear optimization. To bridge this gap, we develop a learning framework for spatial branching and show its efficacy in the context of …
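Algorithm selection here means learning a map from instance features to the best-performing branching rule. A minimal sketch with entirely synthetic features and labels (in practice the labels would come from offline solver runs on a training set of instances):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 4))          # instance features, e.g. density, nonlinearity
best_rule = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic "winning rule" labels

# train a selector that picks a spatial-branching rule per instance
selector = DecisionTreeClassifier(max_depth=3).fit(X, best_rule)

features_new = rng.standard_normal((1, 4))
rule = selector.predict(features_new)[0]   # 0 / 1 -> which branching rule to apply
```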

A line search based proximal stochastic gradient algorithm with dynamical variance reduction

Many optimization problems arising from machine learning applications can be cast as the minimization of the sum of two functions: the first typically represents the expected risk, which in practice is replaced by the empirical risk, while the second imposes a priori information on the solution. Since in general the first term …
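A minimal sketch of the building blocks: proximal stochastic gradient for an ℓ1-regularized least-squares problem, with a simple backtracking line search on the minibatch model. The paper's specific line-search and dynamic variance-reduction rules are not reproduced here; this only illustrates the prox-SGD skeleton they refine.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 20
A = rng.standard_normal((n, d))
x_true = np.zeros(d); x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.1 * rng.standard_normal(n)
lam = 0.1                                   # l1 regularization weight

def f_batch(x, idx):                        # empirical risk on a minibatch
    r = A[idx] @ x - b[idx]
    return 0.5 * np.mean(r ** 2)

def grad_batch(x, idx):
    return A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)

def prox_l1(x, t):                          # prox of t * lam * ||.||_1 (soft threshold)
    return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

x = np.zeros(d)
for k in range(300):
    idx = rng.choice(n, size=32, replace=False)
    g = grad_batch(x, idx)
    alpha = 1.0
    while True:                             # backtracking: sufficient decrease on the batch
        x_new = prox_l1(x - alpha * g, alpha)
        dx = x_new - x
        if f_batch(x_new, idx) <= f_batch(x, idx) + g @ dx + (0.5 / alpha) * dx @ dx:
            break
        alpha *= 0.5
    x = x_new
```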

Shattering Inequalities for Learning Optimal Decision Trees

Recently, mixed-integer programming (MIP) techniques have been applied to learn optimal decision trees. Empirical research has shown that optimal trees typically have better out-of-sample performance than heuristic approaches such as CART. However, the underlying MIP formulations often suffer from slow runtimes, due to weak linear programming (LP) relaxations. In this paper, we first propose a …
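The mechanics of strengthening a weak LP relaxation with a family of valid inequalities can be sketched generically. The loop below separates cover inequalities for a toy 0-1 knapsack using scipy.optimize.linprog; the paper's shattering inequalities play the analogous role for decision-tree formulations.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 0-1 knapsack: max c @ x  s.t.  w @ x <= W,  x in {0,1}^n (LP-relaxed to [0,1]^n)
c = np.array([10.0, 9.0, 8.0, 7.0, 6.0])
w = np.array([5.0, 5.0, 4.0, 4.0, 3.0])
W = 10.0
n = len(c)
A_ub, b_ub = [w.tolist()], [W]

for it in range(10):
    res = linprog(-c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=[(0, 1)] * n)
    x = res.x
    # greedy separation: collect items in decreasing x_i until their weight exceeds W;
    # the resulting cover C gives the valid inequality  sum_{i in C} x_i <= |C| - 1
    cover, wt = [], 0.0
    for i in np.argsort(-x):
        cover.append(i); wt += w[i]
        if wt > W:
            break
    if wt <= W or x[cover].sum() <= len(cover) - 1 + 1e-9:
        break                               # no violated cut in this family
    row = np.zeros(n); row[cover] = 1.0
    A_ub.append(row.tolist()); b_ub.append(len(cover) - 1)
```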

Mixed-Integer Optimization with Constraint Learning

We establish a broad methodological foundation for mixed-integer optimization with learned constraints. We propose an end-to-end pipeline for data-driven decision making in which constraints and objectives are directly learned from data using machine learning, and the trained models are embedded in an optimization formulation. We exploit the mixed-integer optimization-representability of many machine learning methods, including …
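For a linear predictive model the embedding is exact: the trained coefficients become a linear constraint. A minimal sketch (hypothetical data; the threshold 2.0 is chosen arbitrarily):

```python
import numpy as np
from scipy.optimize import linprog
from sklearn.linear_model import LinearRegression

# learn an unknown constraint quantity q(x) from data (here: a hidden linear map)
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 3))
q = X @ np.array([2.0, 1.0, 3.0]) + 0.05 * rng.standard_normal(500)

model = LinearRegression().fit(X, q)
a_hat, b_hat = model.coef_, model.intercept_

# embed the trained model as the constraint  a_hat @ x + b_hat <= 2.0
# and maximize a known linear objective over x in [0,1]^3
c = np.array([1.0, 1.0, 1.0])
res = linprog(-c, A_ub=[a_hat], b_ub=[2.0 - b_hat], bounds=[(0, 1)] * 3)
print(res.x)
```

Tree ensembles and ReLU networks are also MIO-representable but require auxiliary binary variables, which is where the mixed-integer machinery enters.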

A Prescriptive Machine Learning Method for Courier Scheduling on Crowdsourced Delivery Platforms

Crowdsourced delivery platforms face the unique challenge of meeting dynamic customer demand using couriers not employed by the platform. As a result, the delivery capacity of the platform is uncertain. To reduce the uncertainty, the platform can offer a reward to couriers who agree to be available to make deliveries for a specified period of …

Optimization with Constraint Learning: A Framework and Survey

Many real-life optimization problems contain one or more constraints or objectives for which there are no explicit formulas. If data are available, however, they can be used to learn the constraints. The benefits of this approach are clear; however, there is a need for this process to be carried out in a …

Inexact bilevel stochastic gradient methods for constrained and unconstrained lower-level problems

Two-level stochastic optimization formulations have become instrumental in a number of machine learning contexts such as continual learning, neural architecture search, adversarial learning, and hyperparameter tuning. Practical stochastic bilevel optimization problems become challenging in optimization or learning scenarios where the number of variables is high or there are constraints. In this paper, we introduce a bilevel stochastic gradient method …
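The alternating structure, an inexact lower-level solve followed by an upper-level hypergradient step, can be shown on a toy unconstrained problem where the hypergradient is known in closed form (the paper additionally treats constrained lower levels):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
# toy bilevel problem:
#   upper:  min_x  F(x, y*(x)) = 0.5 * ||y*(x) - t||^2
#   lower:  y*(x) = argmin_y  G(x, y) = 0.5 * ||y - x||^2
# here y*(x) = x and dy*/dx = I, so the exact hypergradient is (y - t);
# the lower level is still solved only inexactly, by a few noisy gradient steps
t = np.ones(d)
x = rng.standard_normal(d)
y = np.zeros(d)

for k in range(200):
    for _ in range(3):                      # inexact lower-level solve on G(x, .)
        y -= 0.5 * ((y - x) + 0.01 * rng.standard_normal(d))
    x -= 0.1 * (y - t)                      # upper-level step, approximate hypergradient

print(np.linalg.norm(x - t))                # x should approach t
```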

Regularized Step Directions in Conjugate Gradient Minimization for Machine Learning

Conjugate gradient minimization methods (CGM) and their accelerated variants are widely used in machine learning applications. We focus on the use of cubic regularization to improve the CGM direction independent of the steplength (learning rate) computation. Using Shanno’s reformulation of CGM as a memoryless BFGS method, we derive new formulas for the regularized step direction, …
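Shanno's memoryless BFGS view replaces the stored inverse Hessian with a rank-two correction of the identity built from the last step s and gradient change y. A sketch of the standard (unregularized) direction, which the paper's cubic regularization then modifies:

```python
import numpy as np

def memoryless_bfgs_direction(g_new, s, y):
    """Direction d = -H g_new with the memoryless BFGS matrix
    H = I - (s y^T + y s^T)/(y^T s) + (1 + y^T y/(y^T s)) s s^T/(y^T s)."""
    ys = y @ s
    if ys <= 1e-12:                         # safeguard: fall back to steepest descent
        return -g_new
    sg, yg = s @ g_new, y @ g_new
    return (-g_new
            + (yg / ys) * s + (sg / ys) * y
            - (1.0 + y @ y / ys) * (sg / ys) * s)

# demo on a convex quadratic 0.5 x^T A x - b^T x with exact line search
A = np.diag([1.0, 10.0, 100.0]); b = np.ones(3)
x = np.zeros(3); g = A @ x - b; d = -g
for k in range(50):
    alpha = -(g @ d) / (d @ A @ d)          # exact line search for a quadratic
    s = alpha * d
    x = x + s
    g_new = A @ x - b
    d = memoryless_bfgs_direction(g_new, s, g_new - g)
    g = g_new
print(np.linalg.norm(g))                    # gradient norm should be near zero
```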

A New Insight on Augmented Lagrangian Method with Applications in Machine Learning

By exploiting double-penalty terms for the primal subproblem, we develop a novel relaxed augmented Lagrangian method for solving a family of convex optimization problems subject to equality or inequality constraints. This new method is then extended to solve a general multi-block separable convex optimization problem, and two related primal-dual hybrid gradient algorithms are also discussed. …
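For reference, the classical single-penalty augmented Lagrangian iteration that the proposed double-penalty relaxation builds on, sketched for an equality-constrained quadratic with toy data:

```python
import numpy as np

# minimize f(x) = 0.5 * ||x||^2  s.t.  Ax = b, via the augmented Lagrangian
#   L_rho(x, lam) = f(x) + lam @ (Ax - b) + 0.5 * rho * ||Ax - b||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6)); b = rng.standard_normal(3)
rho = 10.0
lam = np.zeros(3)
x = np.zeros(6)

for k in range(50):
    # primal step: minimize L_rho(., lam) exactly (a strongly convex quadratic),
    # from the optimality condition (I + rho A^T A) x = rho A^T b - A^T lam
    M = np.eye(6) + rho * A.T @ A
    x = np.linalg.solve(M, rho * A.T @ b - A.T @ lam)
    lam += rho * (A @ x - b)                # dual ascent on the multipliers

print(np.linalg.norm(A @ x - b))            # constraint violation should vanish
```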