Mathematical Models and Approximate Solution Approaches for the Stochastic Bin Packing Problem

We consider the (single-stage) stochastic bin packing problem (SBPP), which is based on a given list of items whose sizes are represented by stochastically independent random variables. The SBPP requires determining the minimum number of unit-capacity bins needed to pack all the items, such that for each bin the probability of … Read more
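
The excerpt truncates the chance constraint; a standard chance-constrained sketch of the SBPP is given below, where the overflow tolerance \varepsilon and the variables x_{ij}, y_j are illustrative assumptions and not taken from the abstract.

```latex
% Sketch of a chance-constrained SBPP formulation (illustrative notation):
\min \sum_{j} y_j
\quad \text{s.t.} \quad
\mathbb{P}\Big(\sum_{i} X_i \, x_{ij} > 1\Big) \le \varepsilon \ \ \forall j,
\qquad
\sum_{j} x_{ij} = 1 \ \ \forall i,
\qquad
x_{ij} \le y_j, \ \ x_{ij}, y_j \in \{0,1\},
```

where X_i is the random size of item i, x_{ij} assigns item i to bin j, and y_j indicates whether bin j is used.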

Distributionally Robust Two-Stage Stochastic Programming

Distributionally robust optimization is a popular modeling paradigm in which the underlying distribution of the random parameters in a stochastic optimization model is unknown. Therefore, hedging against a range of distributions, properly characterized in an ambiguity set, is of interest. We study two-stage stochastic programs with linear recourse in the context of distributional ambiguity, and … Read more
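
For reference, a generic two-stage stochastic program with linear recourse under distributional ambiguity can be written as follows; the notation is standard textbook notation rather than the paper's own.

```latex
% Generic distributionally robust two-stage program with linear recourse:
\min_{x \in X}\; c^\top x + \sup_{P \in \mathcal{P}} \mathbb{E}_P\big[Q(x,\xi)\big],
\qquad
Q(x,\xi) = \min_{y \ge 0}\big\{\, q(\xi)^\top y \;:\; W y \ge h(\xi) - T(\xi)\, x \,\big\},
```

where the ambiguity set \mathcal{P} collects the candidate distributions being hedged against.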

Are Weaker Stationarity Concepts of Stochastic MPCC Problems Significant in Absence of SMPCC-LICQ?

In this article, we study weak stationarity conditions (A- and C-) for a particular class of degenerate stochastic mathematical programming problems with complementarity constraints (SMPCC, for short). The importance of these weak stationarity concepts in the absence of SMPCC-LICQ is presented through toy problems in which points of local or global minimum are weak stationary points … Read more
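
For orientation, the deterministic MPCC counterparts of these concepts (in the sense of Scheel and Scholtes) differ only in the sign requirements placed on the multipliers of the biactive complementarity pairs; the sketch below uses generic MPCC notation, not the paper's SMPCC notation.

```latex
% Deterministic MPCC stationarity (sketch): for
%   min f(x)  s.t.  0 \le G(x) \perp H(x) \ge 0,
% with multipliers \nu_i, \mu_i attached to G_i, H_i and biactive set
% I_{00} = \{ i : G_i(x^*) = H_i(x^*) = 0 \}, the concepts differ only on I_{00}:
\text{C-stationary: } \nu_i \mu_i \ge 0, \qquad
\text{A-stationary: } \nu_i \ge 0 \ \text{or} \ \mu_i \ge 0, \qquad
\text{strongly stationary: } \nu_i \ge 0 \ \text{and} \ \mu_i \ge 0.
```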

Distributionally Robust Facility Location with Bimodal Random Demand

In this paper, we consider a decision-maker who wants to determine a subset of locations, from a given set of candidate sites, at which to open facilities and accordingly assign customer demand to these open facilities. Unlike classical facility location settings, we focus on a new setting where customer demand is bimodal, i.e., it displays, or belongs to, … Read more
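
As a concrete picture of bimodal demand, one can think of each customer's demand as drawn from a two-component mixture. The sketch below samples such demands with made-up parameters; it is an illustration only, not the paper's demand model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bimodal_demand(n, p_low=0.5, mu_low=20.0, mu_high=80.0, sigma=5.0):
    """Illustrative two-mode mixture for customer demand.
    All parameter values are made up for this sketch."""
    low_mode = rng.random(n) < p_low             # pick a mode per customer
    means = np.where(low_mode, mu_low, mu_high)  # mode-dependent mean demand
    return np.maximum(rng.normal(means, sigma), 0.0)  # truncate at zero

demand = sample_bimodal_demand(1000)
```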

Finite-Sample Guarantees for Wasserstein Distributionally Robust Optimization: Breaking the Curse of Dimensionality

Wasserstein distributionally robust optimization (DRO) aims to find robust and generalizable solutions by hedging against data perturbations in Wasserstein distance. Despite its recent empirical success in operations research and machine learning, existing performance guarantees for generic loss functions are either overly conservative due to the curse of dimensionality, or plausible only in large sample asymptotics. … Read more
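
The canonical Wasserstein DRO problem referred to here, built around the empirical distribution of n samples, has the standard form below (generic notation, not the paper's).

```latex
% Wasserstein DRO around the empirical distribution \widehat{P}_n,
% with radius \rho and order-p Wasserstein distance W_p (standard form):
\min_{x \in X}\ \sup_{Q \,:\, W_p(Q,\, \widehat{P}_n) \le \rho}
\mathbb{E}_{Q}\big[\ell(x,\xi)\big].
```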

Stochastic Multi-level Composition Optimization Algorithms with Level-Independent Convergence Rates

In this paper, we study smooth stochastic multi-level composition optimization problems, where the objective function is a nested composition of $T$ functions. We assume access to noisy evaluations of the functions and their gradients, through a stochastic first-order oracle. For solving this class of problems, we propose two algorithms using moving-average stochastic estimates, and analyze … Read more
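
In standard notation (not necessarily the paper's), the T-level problem and a generic moving-average estimate of an inner function value look as follows.

```latex
% T-level stochastic composition problem and a generic moving-average
% estimate z_i^{k+1} of the i-th inner function value (sketch only):
\min_{x \in \mathbb{R}^d}\ F(x) = f_1\big(f_2(\cdots f_T(x)\cdots)\big),
\qquad
z_i^{k+1} = (1-\tau_k)\, z_i^{k} + \tau_k\, \tilde f_i\big(z_{i+1}^{k},\, \xi_i^{k}\big),
```

where \tilde f_i(\cdot,\xi_i) is the noisy oracle evaluation of f_i and \tau_k \in (0,1] is a step parameter.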

An improved randomized algorithm with noise level tuning for large-scale noisy unconstrained DFO problems

In this paper, a new randomized solver (called VRDFON) for noisy unconstrained derivative-free optimization (DFO) problems is discussed. A complexity result in the presence of noise for nonconvex functions is studied. Two effective ingredients of VRDFON are an improved derivative-free line search algorithm with many heuristic enhancements, and quadratic models in adaptively determined subspaces. Numerical results … Read more
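
The basic pattern such solvers build on — probe a random direction, accept the step if the (noisy) function value decreases, otherwise shrink the step — can be sketched as below. This toy loop is an illustration of that generic pattern only; it is not the VRDFON algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_ls_step(f, x, step=1.0, shrink=0.5, expand=2.0, max_tries=20):
    """Generic randomized derivative-free line-search step (illustration only)."""
    d = rng.standard_normal(x.size)
    d /= np.linalg.norm(d)                 # random unit direction
    fx = f(x)
    for _ in range(max_tries):
        trial = x + step * d
        if f(trial) < fx:                  # accept and try a longer step next time
            return trial, step * expand
        step *= shrink                     # otherwise shrink the step
    return x, step                         # no decrease found along this direction

# Toy usage on a noisy quadratic in 50 dimensions.
x, step = np.ones(50), 1.0
noisy_quad = lambda z: z @ z + 1e-3 * rng.standard_normal()
for _ in range(100):
    x, step = random_ls_step(noisy_quad, x, step)
```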

On the Impact of Deep Learning-based Time-series Forecasts on Multistage Stochastic Programming Policies

Multistage stochastic programming provides a modeling framework for sequential decision-making problems that involve uncertainty. One typically overlooked aspect of this methodology is how uncertainty is incorporated into modeling. Traditionally, statistical forecasting techniques with simple forms, e.g., (first-order) autoregressive time-series models, are used to extract scenarios to be added to optimization models to represent the uncertain … Read more
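
For instance, scenario paths can be extracted from a fitted first-order autoregressive model by forward simulation. The snippet below is a generic illustration with made-up parameter values, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

def ar1_scenarios(last_value, phi, c, sigma, horizon, n_scenarios):
    """Sample scenario paths from a fitted AR(1) model
    y_{t+1} = c + phi * y_t + eps,  eps ~ N(0, sigma^2).
    Parameter names and values are illustrative only."""
    paths = np.empty((n_scenarios, horizon))
    y = np.full(n_scenarios, last_value, dtype=float)
    for t in range(horizon):
        y = c + phi * y + sigma * rng.standard_normal(n_scenarios)
        paths[:, t] = y
    return paths

scenarios = ar1_scenarios(last_value=100.0, phi=0.8, c=20.0,
                          sigma=5.0, horizon=12, n_scenarios=50)
```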

Multistage stochastic programs with the entropic risk measure

Over the last two decades, coherent risk measures have been well studied as a principled, axiomatic way to measure the risk of a random variable. Because of this axiomatic approach, coherent risk measures have a number of attractive features for computation, and they have been integrated into a variety of stochastic programming algorithms, including stochastic … Read more
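
For reference, the entropic risk measure of a random cost X at risk-aversion level \gamma > 0 has the standard definition below; it is a convex risk measure but, lacking positive homogeneity, not a coherent one.

```latex
% Entropic risk measure at risk-aversion level \gamma > 0 (standard definition):
\rho_\gamma(X) = \frac{1}{\gamma}\, \log \mathbb{E}\big[e^{\gamma X}\big],
\qquad
\lim_{\gamma \downarrow 0} \rho_\gamma(X) = \mathbb{E}[X].
```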

Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters

Many key problems in machine learning and data science are routinely modeled as optimization problems and solved via optimization algorithms. With the increase in the volume of data and in the size and complexity of the statistical models used to formulate these often ill-conditioned optimization tasks, there is a need for new efficient algorithms able to … Read more
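
The two sources of randomization in the title — over data points and over parameters (coordinates) — can be illustrated on a toy least-squares problem. The snippet below is a generic sketch of minibatch SGD and randomized coordinate descent, not code from the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)
A, b = rng.standard_normal((200, 30)), rng.standard_normal(200)  # toy least-squares data

# Randomization over data: minibatch SGD on f(x) = (1/2n) * ||Ax - b||^2.
x = np.zeros(30)
for _ in range(2000):
    i = rng.integers(0, A.shape[0], size=8)        # sample a minibatch of rows
    g = A[i].T @ (A[i] @ x - b[i]) / len(i)        # stochastic gradient estimate
    x -= 0.01 * g

# Randomization over parameters: randomized coordinate descent on the same f.
y = np.zeros(30)
col_sq = (A ** 2).sum(axis=0)                      # per-coordinate curvature
for _ in range(2000):
    j = rng.integers(0, A.shape[1])                # sample one coordinate
    y[j] -= A[:, j] @ (A @ y - b) / col_sq[j]      # exact minimization along e_j
```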