Solving a linear program via a single unconstrained minimization

This paper proposes a novel approach for solving linear programs. We reformulate a primal-dual linear program as an unconstrained minimization of a convex and twice continuously differentiable merit function. When the optimal set of the primal-dual pair is nonempty, it coincides with the optimal set of the proposed merit function. Minimizing this … Read more
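
For context only (the excerpt does not show the paper's merit function), below is a standard-form primal-dual pair and the optimality conditions any such merit function would have to vanish on; the standard-form normalization is an assumption, not taken from the paper.

```latex
% Standard-form primal-dual pair (assumed normalization):
\[
  \min_{x}\; c^\top x \;\; \text{s.t. } Ax = b,\ x \ge 0,
  \qquad
  \max_{y,s}\; b^\top y \;\; \text{s.t. } A^\top y + s = c,\ s \ge 0.
\]
% A pair of feasible points is optimal for both problems exactly when the duality gap vanishes:
\[
  c^\top x^{*} - b^\top y^{*} \;=\; (s^{*})^\top x^{*} \;=\; 0 .
\]
```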

Retrospective Approximation Sequential Quadratic Programming for Stochastic Optimization with General Deterministic Nonlinear Constraints

In this paper, we propose a framework based on the Retrospective Approximation (RA) paradigm to solve optimization problems with a stochastic objective function and general nonlinear deterministic constraints. This framework sequentially constructs increasingly accurate approximations of the true problem, which are solved to a specified accuracy via a deterministic solver, thereby decoupling the uncertainty from … Read more
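
A minimal sketch of such an RA loop, assuming a toy stochastic objective, a single deterministic constraint, and sample-size/tolerance schedules chosen for illustration; scipy's SLSQP stands in for the paper's SQP solver.

```python
# Retrospective Approximation sketch: at stage k, build a sample-average approximation of the
# stochastic objective from n_k samples and solve it to tolerance tol_k with a deterministic
# constrained solver, warm-starting each stage from the previous solution.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def saa_objective(x, xi):
    # sample average of E[(x1 - xi)^2] + x2^2, with xi ~ N(1, 0.5^2)
    return np.mean((x[0] - xi) ** 2) + x[1] ** 2

# one deterministic nonlinear equality constraint: x1^2 + x2^2 = 2
constraints = [{"type": "eq", "fun": lambda x: x[0] ** 2 + x[1] ** 2 - 2.0}]

x = np.array([1.5, 0.5])                     # warm start carried across RA stages
sample_sizes = [10, 100, 1_000, 10_000]      # increasingly accurate approximations
tolerances   = [1e-2, 1e-3, 1e-5, 1e-7]      # solved to increasingly tight accuracy

for n_k, tol_k in zip(sample_sizes, tolerances):
    xi = rng.normal(loc=1.0, scale=0.5, size=n_k)
    res = minimize(saa_objective, x, args=(xi,), method="SLSQP",
                   constraints=constraints, options={"ftol": tol_k, "maxiter": 200})
    x = res.x                                # solution warm-starts the next, finer stage
    print(f"n={n_k:>6}  tol={tol_k:.0e}  x={np.round(x, 4)}")
```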

A Framework for Explainable Knowledge Generation with Expensive Sample Evaluations

Real-world problems often require complex modeling and computation efforts to be effectively addressed. Relying solely on data-driven approaches without integrating physics-based models can result in limited predictive capabilities. Even advanced techniques such as deep learning may be impractical for decision-makers due to the lack of data and challenges in justifying and explaining results. In … Read more

On Multidimensional Disjunctive Inequalities for Chance-Constrained Stochastic Problems with Finite Support

We consider mixed-integer linear chance-constrained problems for which the random vector that parameterizes the feasible region has finite support. Our key objective is to improve branch-and-bound or branch-and-cut approaches by introducing new types of valid inequalities that strengthen the dual bounds and, thereby, the overall performance of such methods. We introduce so-called primal-dual as … Read more
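
As a point of reference, the following is the standard big-M scenario reformulation of a chance constraint with finite support, i.e., the kind of baseline model that valid inequalities of this type are meant to strengthen; the notation is assumed for illustration, not taken from the paper.

```latex
% Scenarios (T^s, h^s) with probabilities p_s, s = 1,...,S; z_s = 1 allows scenario s to be violated.
\[
  \min\; c^\top x
  \quad \text{s.t.}\quad
  T^{s} x \ \ge\ h^{s} - M_{s}\, z_{s} \quad (s = 1,\dots,S), \qquad
  \sum_{s=1}^{S} p_{s}\, z_{s} \ \le\ \varepsilon, \qquad
  x \in X,\; z \in \{0,1\}^{S},
\]
% where each M_s is a sufficiently large constant and \varepsilon is the allowed violation probability.
```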

Mixed-Feature Logistic Regression Robust to Distribution Shifts

Logistic regression models are widely used in the social and behavioral sciences and in high-stakes domains, due to their simplicity and interpretability. At the same time, such domains are permeated by distribution shifts, where the distribution generating the data changes between training and deployment. In this paper, we study a distributionally robust logistic regression … Read more
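
A minimal sketch of one well-known distributionally robust logistic regression from the literature, in which a Wasserstein ball around the empirical distribution (feature perturbations only) reduces to the logistic loss plus a norm penalty on the coefficients; the paper's mixed-feature (categorical and numerical) model is more general, and the data below are synthetic.

```python
# Classic continuous-feature DRO baseline: minimize average logistic loss + eps * ||beta||_2,
# where eps is the radius of the ambiguity set around the empirical distribution.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d) + 0.3 * rng.normal(size=n))   # labels in {-1, +1}

def dro_logistic_loss(theta, X, y, eps):
    beta, b = theta[:-1], theta[-1]
    margins = y * (X @ beta + b)
    loss = np.mean(np.logaddexp(0.0, -margins))    # (1/n) sum log(1 + exp(-y (x'beta + b)))
    return loss + eps * np.linalg.norm(beta, 2)    # robustness penalty from the ambiguity radius

theta0 = np.zeros(d + 1)
res = minimize(dro_logistic_loss, theta0, args=(X, y, 0.1), method="L-BFGS-B")
print("robust coefficients:", np.round(res.x[:-1], 3))
```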

Stability Regularized Cross-Validation

We revisit the problem of ensuring strong test-set performance via cross-validation. Motivated by the generalization theory literature, we propose a nested k-fold cross-validation scheme that selects hyperparameters by minimizing a weighted sum of the usual cross-validation metric and an empirical model-stability measure. The weight on the stability term is itself chosen via a nested … Read more
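
A sketch of the selection rule described above, under two assumptions made only for illustration: stability is measured as the spread of fitted coefficients across folds (one plausible choice, not necessarily the paper's measure), and the weight on the stability term is fixed rather than tuned by the nested scheme.

```python
# Score each candidate hyperparameter by (k-fold CV error) + w * (empirical instability),
# then pick the minimizer.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

X, y = make_regression(n_samples=300, n_features=20, noise=10.0, random_state=0)
kf = KFold(n_splits=5, shuffle=True, random_state=0)
w = 0.1                                            # weight on the stability term (assumed fixed)

def stability_regularized_score(alpha):
    errors, coefs = [], []
    for train_idx, val_idx in kf.split(X):
        model = Ridge(alpha=alpha).fit(X[train_idx], y[train_idx])
        errors.append(mean_squared_error(y[val_idx], model.predict(X[val_idx])))
        coefs.append(model.coef_)
    coefs = np.array(coefs)
    # instability: mean distance of each fold's coefficients from their cross-fold average
    instability = np.mean(np.linalg.norm(coefs - coefs.mean(axis=0), axis=1))
    return np.mean(errors) + w * instability

alphas = [0.01, 0.1, 1.0, 10.0, 100.0]
best_alpha = min(alphas, key=stability_regularized_score)
print("selected alpha:", best_alpha)
```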

A Dantzig-Wolfe Decomposition Method for Quasi-Variational Inequalities

We propose an algorithm to solve quasi-variational inequality problems, based on the Dantzig-Wolfe decomposition paradigm. Our approach solves variational inequalities, which are simpler problems, in the subproblems, while restricting the quasi-variational inequalities to the master problems, which are generally (much) smaller in size when the original problem is large-scale. We prove global convergence of our … Read more
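
For context, a quasi-variational inequality and its variational-inequality special case (which is what makes the subproblems simpler) can be stated as follows, with generic notation assumed here.

```latex
% QVI: find x^* \in K(x^*) such that
\[
  \langle F(x^{*}),\, y - x^{*} \rangle \ \ge\ 0 \qquad \text{for all } y \in K(x^{*}),
\]
% where the feasible set K(\cdot) depends on the decision itself; when K(x) \equiv K is a fixed set,
% this reduces to an ordinary (simpler) variational inequality.
```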

Multi-cut stochastic approximation methods for solving stochastic convex composite optimization

The development of a multi-cut stochastic approximation (SA) method for solving stochastic convex composite optimization (SCCO) problems has remained an open challenge. The difficulty arises from the fact that the stochastic multi-cut model, constructed as the pointwise maximum of individual stochastic linearizations, provides a biased estimate of the objective function, with the error being uncontrollable. … Read more
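
A hedged numerical illustration of the bias mentioned above: even if each stochastic linearization is an unbiased estimate of its cut, their pointwise maximum over-estimates the maximum of the true cuts (Jensen's inequality, E[max] >= max E). The two toy "cuts" below are illustrative, not taken from the paper.

```python
# Monte Carlo check that the pointwise max of unbiased noisy cuts is biased upward.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
true_cuts = np.array([1.0, 1.0])                             # both true cut values equal 1.0
noisy_cuts = true_cuts + rng.normal(scale=0.5, size=(n, 2))  # unbiased noisy evaluations

true_model = true_cuts.max()                      # max of the true cuts = 1.0
estimated_model = noisy_cuts.max(axis=1).mean()   # E[max of noisy cuts] > 1.0

print(f"max of true cuts:         {true_model:.3f}")
print(f"mean of max of estimates: {estimated_model:.3f}  (upward bias)")
```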

Stability analysis for two-level value functions and application to numerically solve a pessimistic bilevel program

Some stability results are presented for a two-level value function, which is the optimal value function of a parametric optimization problem constrained by the optimal solution set of another parametric optimization problem. It is then shown how to use these stability results to write down (and subsequently compute) stationary points for a pessimistic bilevel optimization … Read more
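
In generic notation, assumed here for illustration, a two-level value function and the pessimistic bilevel program it feeds into can be written as follows.

```latex
% Two-level value function: the optimal value of an upper-level problem constrained by the
% solution set S(x) of a lower-level parametric problem.
\[
  \varphi(x) \;=\; \min_{y}\ \{\, F(x,y) \;:\; y \in S(x) \,\},
  \qquad
  S(x) \;=\; \operatorname*{arg\,min}_{z}\ \{\, f(x,z) \;:\; g(x,z) \le 0 \,\}.
\]
% The pessimistic bilevel program guards against the worst lower-level response:
\[
  \min_{x \in X}\ \varphi_{p}(x), \qquad \varphi_{p}(x) \;=\; \max_{y \in S(x)} F(x,y).
\]
```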

Inverse Optimization via Learning Feasible Regions

We study inverse optimization (IO), where the goal is to use a parametric optimization program as the hypothesis class to infer relationships between input-decision pairs. Most of the literature focuses on learning only the objective function, as learning the constraints (i.e., the feasible region) leads to nonconvex training programs. Motivated by this, we focus on … Read more
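
Schematically, and with placeholder notation not taken from the paper, the IO training problem behind this discussion can be written as follows.

```latex
% Fit the parameters \theta of a forward program so that its optimal solutions reproduce the
% observed input-decision pairs (u_i, x_i); the loss \ell and parametrization are placeholders.
\[
  \min_{\theta}\ \sum_{i=1}^{N} \ell\big(x_{i},\, \hat{x}(\theta; u_{i})\big)
  \quad \text{where} \quad
  \hat{x}(\theta; u) \in \operatorname*{arg\,min}_{x}\ \{\, f_{\theta}(x; u) \;:\; x \in \mathcal{X}_{\theta}(u) \,\}.
\]
% Learning only the objective f_\theta (with \mathcal{X} fixed) is the common, tractable case;
% learning the feasible region \mathcal{X}_\theta is what makes the training problem nonconvex.
```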