Minkowski Centers via Robust Optimization: Computation and Applications

Centers of convex sets are geometric objects that have received extensive attention in the mathematical and optimization literature, from both a theoretical and a practical standpoint. For instance, they serve as initialization points for many algorithms, such as interior-point, hit-and-run, or cutting-plane methods. First, we observe that computing a Minkowski center of a convex set can be formulated as … Read more
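To make the object concrete, below is a minimal Python sketch, assuming a bounded polytope P = {x : Ax ≤ b}. It approximates the Minkowski center by bisection on the symmetry factor α, using the fact that x + α(x − P) ⊆ P iff (1+α) aᵢᵀx ≤ bᵢ + α·min_{y∈P} aᵢᵀy for every facet i. This is an illustrative method only, not the robust-optimization formulation developed in the paper.

```python
# Minimal sketch, assuming a bounded polytope P = {x : A x <= b}; bisection on the
# symmetry factor alpha, one feasibility LP per step. Illustrative only, not the
# paper's robust-optimization formulation.
import numpy as np
from scipy.optimize import linprog

def minkowski_center(A, b, tol=1e-6):
    m, n = A.shape
    free = [(None, None)] * n
    # h_i = min over P of a_i^T y (one LP per facet)
    h = np.array([linprog(A[i], A_ub=A, b_ub=b, bounds=free).fun for i in range(m)])

    def point_for(alpha):
        # any x with (1+alpha) A x <= b + alpha*h certifies symmetry factor alpha
        res = linprog(np.zeros(n), A_ub=(1 + alpha) * A, b_ub=b + alpha * h, bounds=free)
        return res.x if res.status == 0 else None

    lo, hi, best = 0.0, 1.0, point_for(0.0)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        x = point_for(mid)
        if x is not None:
            lo, best = mid, x
        else:
            hi = mid
    return best, lo

# Unit triangle: the center should be near the centroid, with alpha close to 1/2.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])
print(minkowski_center(A, b))
```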

Mixed-Integer Optimization with Constraint Learning

We establish a broad methodological foundation for mixed-integer optimization with learned constraints. We propose an end-to-end pipeline for data-driven decision making in which constraints and objectives are directly learned from data using machine learning, and the trained models are embedded in an optimization formulation. We exploit the mixed-integer-optimization representability of many machine learning methods, including … Read more
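As a toy, hedged illustration of such a pipeline (the linear surrogate, the synthetic data, and the bound 12.0 are assumptions for this example; the framework also covers trees, ensembles, and neural networks), one can fit a model of an unknown constraint function from data and embed it in a mixed-integer program:

```python
# Toy sketch: fit a linear surrogate g_hat(z) of an unknown constraint from data,
# then embed it in a MIP. Data, bound 12.0, and the model choice are illustrative
# assumptions, not the paper's case study.
import numpy as np
import pulp
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
Z = rng.uniform(0, 10, size=(200, 2))                          # observed decisions
g = 1.5 * Z[:, 0] + 0.8 * Z[:, 1] + rng.normal(0, 0.1, 200)    # noisy constraint outputs

surrogate = LinearRegression().fit(Z, g)                       # learned g_hat(z) = w^T z + b
w, b0 = surrogate.coef_, float(surrogate.intercept_)

prob = pulp.LpProblem("mio_with_learned_constraint", pulp.LpMaximize)
z = [pulp.LpVariable(f"z{i}", lowBound=0, upBound=10, cat="Integer") for i in range(2)]
prob += 3 * z[0] + 2 * z[1]                                    # known objective
prob += pulp.lpSum(float(w[i]) * z[i] for i in range(2)) + b0 <= 12.0  # embedded learned constraint
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([v.value() for v in z])
```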

Optimization with Constraint Learning: A Framework and Survey

Many real-life optimization problems contain one or more constraints or objectives for which no explicit formulas are available. If data are available, however, they can be used to learn these constraints. The benefits of this approach are clear; however, this process needs to be carried out in a … Read more

An extension of the Reformulation-Linearization Technique to nonlinear optimization

We introduce a novel Reformulation-Perspectification Technique (RPT) to obtain convex approximations of nonconvex continuous optimization problems. RPT consists of two steps: a reformulation step and a perspectification step. The reformulation step generates redundant nonconvex constraints from pairwise multiplication of the existing constraints. The perspectification step then convexifies the nonconvex components by using perspective … Read more
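As a small illustration of the reformulation step (the constraints here are an assumed toy instance in the spirit of RLT, not taken from the paper): multiplying the pair of valid constraints $1 - x_1 \ge 0$ and $1 - x_2 \ge 0$ yields the redundant nonconvex constraint $(1 - x_1)(1 - x_2) \ge 0$, i.e., $1 - x_1 - x_2 + x_1 x_2 \ge 0$. The perspectification step then targets the resulting nonconvex term $x_1 x_2$, exploiting the fact that the perspective $(x, t) \mapsto t\, f(x/t)$ of a convex function $f$ is jointly convex for $t > 0$.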

Pareto Adaptive Robust Optimality via a Fourier-Motzkin Elimination Lens

We formalize the concept of Pareto Adaptive Robust Optimality (PARO) for linear Adaptive Robust Optimization (ARO) problems. A worst-case optimal solution pair of here-and-now decisions and wait-and-see decisions is PARO if it cannot be Pareto dominated by another solution, i.e., there does not exist another such pair that performs at least as well in all … Read more
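A hedged formalization of the dominance notion (the notation is assumed for illustration and is not necessarily the paper's): for an ARO problem with uncertainty set $U$ and objective $c^\top x + d^\top y$, a feasible pair $(x, y(\cdot))$ is Pareto dominated by a feasible pair $(x', y'(\cdot))$ if $c^\top x' + d^\top y'(u) \le c^\top x + d^\top y(u)$ for every scenario $u \in U$, with strict inequality for at least one $u$; PARO solutions are the worst-case optimal pairs for which no such dominating pair exists.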

A Reformulation-Linearization Technique for Optimization over Simplices

We study non-convex optimization problems over simplices. We show that for a large class of objective functions, the convex approximation obtained from the Reformulation-Linearization Technique (RLT) admits optimal solutions that exhibit a sparsity pattern. This characteristic of the optimal solutions allows us to conclude that (i) a linear matrix inequality constraint, which is often added … Read more
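For a concrete (assumed) instance of the construction: when minimizing a quadratic $x^\top Q x$ over the unit simplex $\Delta = \{x \ge 0,\ e^\top x = 1\}$, RLT linearizes $X \approx x x^\top$ and solves $\min \langle Q, X \rangle$ subject to $Xe = x$, $e^\top x = 1$, $X \ge 0$, $x \ge 0$, where the entrywise constraints $X \ge 0$ come from multiplying the pairs $x_i \ge 0$, $x_j \ge 0$, and $Xe = x$ comes from multiplying $e^\top x = 1$ by $x$. The linear matrix inequality alluded to above is typically the additional strengthening $X \succeq x x^\top$.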

Robust Convex Optimization: A New Perspective That Unifies And Extends

Robust convex constraints are difficult to handle, since finding the worst-case scenario is equivalent to maximizing a convex function. In this paper, we propose a new approach to deal with such constraints that unifies approaches known in the literature and extends them in a significant way. The extension either yields better solutions than the … Read more
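To see the difficulty on a small (assumed) example: enforcing $\|x + Pu\|_2^2 \le 1$ for all $u$ in a box requires evaluating $\max_{u \in [-1,1]^L} \|x + Pu\|_2^2$, the maximization of a convex quadratic over a box, which is NP-hard in general; tractable approaches therefore have to avoid computing this worst case exactly.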

Convex Maximization via Adjustable Robust Optimization

Maximizing a convex function over convex constraints is an NP-hard problem in general. We prove that such a problem can be reformulated as an adjustable robust optimization (ARO) problem where each adjustable variable corresponds to a unique constraint of the original problem. We use ARO techniques to obtain approximate solutions to the convex maximization problem. … Read more

Probabilistic guarantees in Robust Optimization

We develop a general methodology to derive probabilistic guarantees for solutions of robust optimization problems. Our analysis applies broadly to any convex compact uncertainty set and to any constraint affected by uncertainty in a concave manner, under minimal assumptions on the underlying stochastic process. Namely, we assume that the coordinates of the noise vector are … Read more
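As a hedged numerical companion (the constraint, the uniform noise model, and all data below are assumptions for illustration, not the paper's setting), one can estimate by Monte Carlo the violation probability that such a priori guarantees bound:

```python
# Estimate by Monte Carlo the violation probability of a fixed solution x for an
# uncertain linear constraint (a0 + sum_j u_j a_j)^T x <= b, under assumed i.i.d.
# bounded noise u_j ~ U[-1, 1]. All data here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, L, N = 5, 3, 100_000
a0, A = rng.normal(size=n), rng.normal(size=(L, n)) * 0.1   # nominal data and perturbations
x, b = np.ones(n), a0 @ np.ones(n) + 0.5                    # a candidate solution and RHS

U = rng.uniform(-1.0, 1.0, size=(N, L))                     # sampled noise vectors
lhs = a0 @ x + U @ (A @ x)                                  # constraint value per sample
print("estimated violation probability:", np.mean(lhs > b))
```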

Tight tail probability bounds for distribution-free decision making

Chebyshev’s inequality provides an upper bound on the tail probability of a random variable based on its mean and variance. While tight, the inequality has been criticized for being attained only by pathological distributions that abuse the unboundedness of the underlying support and are not considered realistic in many applications. We provide alternative tight lower … Read more
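For concreteness (a standard textbook example, not taken from the paper): Chebyshev's bound $\mathbb{P}(|X - \mu| \ge k\sigma) \le 1/k^2$ is attained by the three-point distribution placing mass $1 - 1/k^2$ at $\mu$ and mass $1/(2k^2)$ at each of $\mu \pm k\sigma$; it has mean $\mu$, variance $\sigma^2$, and tail probability exactly $1/k^2$, yet it concentrates almost all of its mass at a single point, which is precisely the pathological behaviour the criticism refers to.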