Convergence Analysis and a DC Approximation Method for Data-driven Mathematical Programs with Distributionally Robust Chance Constraints

In this paper, we study the convergence of data-driven mathematical programs with distributionally robust chance constraints (MPDRCC) under weaker conditions, without assuming continuity of the distributionally robust probability functions. Moreover, combined with the data-driven approximation, we propose a DC approximation method for MPDRCC that does not require special tractable structures. We also give the convergence analysis of …
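
As a schematic illustration of the DC (difference-of-convex) idea, not necessarily the exact construction of the paper: for any \(t > 0\) the indicator of a constraint violation satisfies

\[
\mathbf{1}\{z > 0\} \;\le\; \tfrac{1}{t}\big[(z + t)_+ - (z)_+\big],
\]

so the distributionally robust chance constraint \(\sup_{P \in \mathcal{P}} P\{c(x,\xi) > 0\} \le \alpha\) is implied by

\[
\sup_{P \in \mathcal{P}} \; \tfrac{1}{t}\,\mathbb{E}_P\big[(c(x,\xi) + t)_+ - (c(x,\xi))_+\big] \;\le\; \alpha .
\]

For a fixed distribution, both expectations are convex in \(x\) whenever \(c(\cdot,\xi)\) is convex, so the bound is a difference of convex functions, and it tightens as \(t \downarrow 0\).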

Adaptive Sampling Quasi-Newton Methods for Derivative-Free Stochastic Optimization

We consider stochastic zero-order optimization problems, which arise in settings from simulation optimization to reinforcement learning. We propose an adaptive sampling quasi-Newton method where we estimate the gradients of a stochastic function using finite differences within a common random number framework. We employ modified versions of a norm test and an inner product quasi-Newton test …
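
A rough sketch of two of the ingredients mentioned above, finite-difference gradients under common random numbers and a variance-based sample-size (norm-type) test, is given below. The function names, the constant theta, and the stochastic oracle F(x, seed) are our assumptions, not the paper's actual interface; the modified inner product quasi-Newton test and the quasi-Newton update itself are omitted.

    import numpy as np

    def fd_gradient(F, x, seeds, h=1e-4):
        """Central-difference estimate of the gradient of E[F(x, xi)].

        Common random numbers: the same seed is reused for the +h and -h
        evaluations of each coordinate, which reduces the variance of the
        difference quotient.
        """
        n, d = len(seeds), len(x)
        samples = np.zeros((n, d))
        for i, seed in enumerate(seeds):
            for j in range(d):
                e = np.zeros(d)
                e[j] = h
                samples[i, j] = (F(x + e, seed) - F(x - e, seed)) / (2.0 * h)
        # Mean estimate and the summed per-coordinate sample variance.
        return samples.mean(axis=0), samples.var(axis=0, ddof=1).sum()

    def gradient_with_norm_test(F, x, n0=8, theta=0.9, max_n=4096, rng=None):
        """Double the sample size until an approximate norm test holds:
        estimated variance of the mean gradient <= theta^2 * ||g_hat||^2."""
        rng = rng if rng is not None else np.random.default_rng(0)
        n = n0
        x = np.asarray(x, dtype=float)
        while True:
            seeds = rng.integers(0, 2**31 - 1, size=n)
            g_hat, var_sum = fd_gradient(F, x, seeds)
            if var_sum / n <= (theta * np.linalg.norm(g_hat)) ** 2 or n >= max_n:
                return g_hat, n
            n *= 2

Here F(x, seed) would seed a simulator and return one noisy objective value; the resulting gradient estimate would then drive an L-BFGS-style update, which is not shown.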

Coupled Learning Enabled Stochastic Programming with Endogenous Uncertainty

Predictive analytics, empowered by machine learning, is usually followed by decision-making problems in prescriptive analytics. We extend this sequential prediction-optimization paradigm to a coupled scheme in which the prediction model guides the decision problem to produce coordinated decisions that yield higher levels of performance. Specifically, for stochastic programming (SP) models with latently decision-dependent uncertainty, …
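
Schematically (in our notation, not necessarily the paper's), the coupled scheme replaces the fixed distribution of a standard SP with one predicted from the decision itself,

\[
\min_{x \in X} \; c(x) + \mathbb{E}_{\xi \sim \mathbb{P}_{\theta}(x)}\big[\,Q(x,\xi)\,\big],
\]

where \(Q(x,\xi)\) is the recourse cost and the parameters \(\theta\) of the predictive model \(\mathbb{P}_{\theta}(x)\) are learned jointly with, rather than prior to, the optimization over \(x\).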

Optimistic Distributionally Robust Optimization for Nonparametric Likelihood Approximation

The likelihood function is a fundamental component in Bayesian statistics. However, evaluating the likelihood of an observation is computationally intractable in many applications. In this paper, we propose a nonparametric approximation of the likelihood that identifies a probability measure lying in a neighborhood of the nominal measure and maximizing the probability of observing …
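
In schematic form (the notation and the choice of metric are ours; the paper's may differ), the optimistic nonparametric likelihood of an observation \(\hat{x}\) is

\[
L_{\rho}(\hat{x}) \;=\; \sup\Big\{ \, \mathbb{Q}\big(B_{\delta}(\hat{x})\big) \;:\; d\big(\mathbb{Q}, \hat{\mathbb{P}}\big) \le \rho \, \Big\},
\]

where \(\hat{\mathbb{P}}\) is the nominal (e.g. empirical) measure, \(d\) is a probability metric, \(\rho\) is the radius of the neighborhood of measures, and \(B_{\delta}(\hat{x})\) is a small ball around the observation.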

Calculating Optimistic Likelihoods Using (Geodesically) Convex Optimization

A fundamental problem arising in many areas of machine learning is the evaluation of the likelihood of a given observation under different nominal distributions. Frequently, these nominal distributions are themselves estimated from data, which makes them susceptible to estimation errors. We thus propose to replace each nominal distribution with an ambiguity set containing all distributions …
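
A schematic form of the resulting subproblem (again in our notation): for each nominal distribution \(\hat{\mathbb{P}}_i\) estimated from data, one evaluates the optimistic log-likelihood

\[
\ell_i^{\star}(\hat{x}) \;=\; \max_{\mathbb{Q} \in \mathcal{B}_{\rho}(\hat{\mathbb{P}}_i)} \; \log q(\hat{x}),
\]

where \(q\) denotes the density of \(\mathbb{Q}\) and \(\mathcal{B}_{\rho}(\hat{\mathbb{P}}_i)\) is the ambiguity set around the \(i\)-th nominal distribution. The title alludes to the fact that, for suitable families of distributions and ambiguity sets, this maximization can be handled by (geodesically) convex optimization even when it is not convex in the Euclidean sense.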

A Data-Driven Approach for a Class of Stochastic Dynamic Optimization Problems

Dynamic stochastic optimization models provide a powerful tool to represent sequential decision-making processes. Typically, these models use statistical predictive methods to capture the structure of the underlying stochastic process without taking into consideration estimation errors and model misspecification. In this context, we propose a data-driven prescriptive analytics framework aiming to integrate the machine learning and …

Stochastic Optimization Models of Insurance Mathematics

This paper surveys stochastic optimization models of insurance mathematics and methods for their solution from the viewpoint of stochastic programming and stochastic optimal control methodology, with vector optimality criteria. The evolution of an insurance company’s capital is considered in discrete time. The main random variables influencing this evolution are the levels of payments, …

Stochastic DC Optimal Power Flow With Reserve Saturation

We propose an optimization framework for stochastic optimal power flow with uncertain loads and renewable generator capacity. Our model follows previous work in assuming that generator outputs respond to load imbalances according to an affine control policy, but introduces a model of saturation of generator reserves by assuming that when a generator’s target level hits …
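
A minimal sketch of the policy described above, in our notation: if \(\Delta(\omega)\) denotes the system-wide load imbalance in scenario \(\omega\) and \(\alpha_i\) the participation factor of generator \(i\), the saturated affine response is

\[
g_i(\omega) \;=\; \min\Big\{ \max\big\{ \bar{g}_i + \alpha_i \Delta(\omega), \; g_i^{\min} \big\}, \; g_i^{\max} \Big\},
\]

i.e. the affine target \(\bar{g}_i + \alpha_i \Delta(\omega)\) is clipped at the generator's reserve limits; the paper's exact treatment of saturation, and of the resulting nonsmoothness, may differ from this sketch.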

Admissibility of solution estimators for stochastic optimization

We look at stochastic optimization problems through the lens of statistical decision theory. In particular, we address admissibility, in the statistical decision theory sense, of the natural sample average estimator for a stochastic optimization problem (also known as the empirical risk minimization (ERM) rule in the learning literature). It is well known that for …
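
For concreteness, the estimator in question is the sample average (ERM) rule: given i.i.d. samples \(\xi_1, \dots, \xi_n\),

\[
\hat{x}_n \;\in\; \operatorname*{arg\,min}_{x \in X} \; \frac{1}{n} \sum_{i=1}^{n} f(x, \xi_i),
\]

which estimates a minimizer of \(\mathbb{E}[f(x,\xi)]\); admissibility asks whether some other decision rule can match its risk everywhere and strictly improve it somewhere.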

Joint chance-constrained programs and the intersection of mixing sets through a submodularity lens

A particularly important substructure in modeling joint linear chance-constrained programs with random right-hand sides and a finite sample space is the intersection of mixing sets with common binary variables (and possibly a knapsack constraint). In this paper, we first revisit basic mixing sets by establishing a strong and previously unrecognized connection to submodularity. In particular, we …
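
For reference, one standard form of the mixing set arising from a chance constraint with a random right-hand side is

\[
M \;=\; \Big\{ (y, z) \in \mathbb{R}_{+} \times \{0,1\}^{n} \;:\; y + h_j z_j \ge h_j, \;\; j = 1, \dots, n \Big\},
\]

where setting \(z_j = 1\) allows scenario \(j\)'s requirement \(y \ge h_j\) to be violated; a joint chance constraint couples several such sets through the common binary vector \(z\), typically together with a knapsack constraint \(\sum_j p_j z_j \le \epsilon\) on the scenario probabilities.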