Sample Average Approximation for Stochastic Nonconvex Mixed Integer Nonlinear Programming via Outer Approximation

Stochastic mixed-integer nonlinear programming (MINLP) is a very challenging type of problem. Although there have been recent advances in developing decomposition algorithms to solve stochastic MINLPs, none of the existing algorithms can address stochastic MINLPs with continuous distributions. We propose a sample average approximation-based outer approximation algorithm (SAAOA) that can address nonconvex two-stage stochastic programs …
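As a rough illustration of the sample average approximation (SAA) idea underlying this line of work (not the paper's SAAOA algorithm), the sketch below replaces an expectation over a continuous demand distribution with an average over drawn samples and optimizes an integer first-stage variable by enumeration; the newsvendor instance and all parameter values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical newsvendor instance: order q units at unit cost c, sell at
# price p; demand D follows a continuous (lognormal) distribution.
c, p = 3.0, 5.0
demand = rng.lognormal(mean=2.0, sigma=0.5, size=2000)  # N demand samples

def saa_objective(q, samples):
    """Sample average of the cost c*q - p*min(q, D), approximating E[.]."""
    return c * q - p * np.minimum(q, samples).mean()

# The first-stage variable q is integer; for this toy problem we simply
# enumerate a small range (a real method would solve a master MINLP).
candidates = range(0, 31)
best_q = min(candidates, key=lambda q: saa_objective(q, demand))
```

Because the sample average is a deterministic function of q once the samples are fixed, the approximating problem is an ordinary (here, integer) optimization problem.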

Multi-stage robust optimization problems: A sampled scenario tree based approach

In this paper, we consider multi-stage robust convex optimization problems of the minimax type. We assume that the total uncertainty set is the Cartesian product of stagewise compact uncertainty sets and approximate the given problem by a sampled subproblem. Instead of looking for the worst case among the infinite and typically uncountable set of uncertain …

Duality and sensitivity analysis of multistage linear stochastic programs

In this paper we investigate the dual of a Multistage Stochastic Linear Program (MSLP) to study two related questions for this class of problems. The first of these questions is the study of the optimal value of the problem as a function of the involved parameters. For this sensitivity analysis problem, we provide formulas for …

A Fully Stochastic Second-Order Trust Region Method

A stochastic second-order trust region method is proposed, which can be viewed as a second-order extension of the trust-region-ish (TRish) algorithm proposed by Curtis et al. [INFORMS J. Optim. 1(3) 200–220, 2019]. In each iteration, a search direction is computed by (approximately) solving a trust region subproblem defined by stochastic gradient and Hessian estimates. The …
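To make the generic mechanism concrete (this is a minimal sketch of a stochastic trust-region iteration, not the paper's method), the loop below builds mini-batch gradient and Hessian estimates for a toy least-squares problem and takes a Cauchy-point step, i.e., it approximately minimizes the quadratic model along the negative gradient inside the trust region; the problem data and the fixed radius are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stochastic least-squares problem: f(x) = E[(a^T x - b)^2] (illustrative).
A = rng.normal(size=(500, 2))
x_true = np.array([1.0, -2.0])
b = A @ x_true + 0.1 * rng.normal(size=500)

def batch_grad_hess(x, batch):
    """Mini-batch (stochastic) gradient and Hessian estimates."""
    Ab, bb = A[batch], b[batch]
    r = Ab @ x - bb
    g = 2 * Ab.T @ r / len(batch)
    H = 2 * Ab.T @ Ab / len(batch)
    return g, H

def cauchy_step(g, H, delta):
    """Minimize the quadratic model g^T s + 0.5 s^T H s along -g,
    truncated at the trust-region boundary ||s|| <= delta."""
    gHg = g @ H @ g
    gnorm = np.linalg.norm(g)
    if gHg <= 0:
        t = delta / gnorm
    else:
        t = min(gnorm**2 / gHg, delta / gnorm)
    return -t * g

x, delta = np.zeros(2), 1.0  # fixed radius for simplicity
for _ in range(200):
    batch = rng.choice(len(b), size=32, replace=False)
    g, H = batch_grad_hess(x, batch)
    x = x + cauchy_step(g, H, delta)
```

A full method would additionally solve the subproblem more accurately and adapt the radius based on actual-versus-predicted reduction; the sketch keeps only the model-and-step structure.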

Bridging Bayesian and Minimax Mean Square Error Estimation via Wasserstein Distributionally Robust Optimization

We introduce a distributionally robust minimum mean square error estimation model with a Wasserstein ambiguity set to recover an unknown signal from a noisy observation. The proposed model can be viewed as a zero-sum game between a statistician choosing an estimator—that is, a measurable function of the observation—and a fictitious adversary choosing a prior—that is, …

Convergence Analysis and a DC Approximation Method for Data-driven Mathematical Programs with Distributionally Robust Chance Constraints

In this paper, we consider the convergence analysis of data-driven mathematical programs with distributionally robust chance constraints (MPDRCC) under weaker conditions, without continuity assumptions on the distributionally robust probability functions. Moreover, combining this with the data-driven approximation, we propose a DC approximation method for MPDRCC that does not require special tractable structure. We also give the convergence analysis of …

Adaptive Sampling Quasi-Newton Methods for Derivative-Free Stochastic Optimization

We consider stochastic zero-order optimization problems, which arise in settings from simulation optimization to reinforcement learning. We propose an adaptive sampling quasi-Newton method where we estimate the gradients of a stochastic function using finite differences within a common random number framework. We employ modified versions of a norm test and an inner product quasi-Newton test …
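The finite-difference-with-common-random-numbers idea mentioned above can be sketched in a few lines (an illustrative toy, not the paper's estimator): both function evaluations in each difference reuse the same random seed, so the shared noise realization cancels and the gradient estimate has far lower variance than with independent draws. The quadratic oracle and step size below are assumptions for illustration.

```python
import numpy as np

def noisy_f(x, rng):
    """Stochastic zero-order oracle: a quadratic plus additive noise
    (illustrative stand-in for a simulation output)."""
    return np.sum(x**2) + 0.1 * rng.normal()

def fd_gradient_crn(x, seed, h=1e-3):
    """Forward-difference gradient estimate with common random numbers:
    the perturbed and base evaluations for each coordinate are run with
    the same seed, so the additive noise cancels in the difference."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        f_plus = noisy_f(x + e, np.random.default_rng(seed))
        f_base = noisy_f(x, np.random.default_rng(seed))
        g[i] = (f_plus - f_base) / h
    return g

x0 = np.array([1.0, -2.0])
g = fd_gradient_crn(x0, seed=42)  # close to the true gradient 2*x0
```

With purely additive noise, as here, the cancellation is exact and only the O(h) forward-difference bias remains; with more general noise, common random numbers reduce, rather than eliminate, the variance.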

Coupled Learning Enabled Stochastic Programming with Endogenous Uncertainty

Predictive analytics, empowered by machine learning, is usually followed by decision-making problems in prescriptive analytics. We extend the above sequential prediction-optimization paradigm to a coupled scheme such that the prediction model can guide the decision problem to produce coordinated decisions yielding higher levels of performance. Specifically, for stochastic programming (SP) models with latently decision-dependent uncertainty, …

Optimistic Distributionally Robust Optimization for Nonparametric Likelihood Approximation

The likelihood function is a fundamental component in Bayesian statistics. However, evaluating the likelihood of an observation is computationally intractable in many applications. In this paper, we propose a nonparametric approximation of the likelihood that identifies a probability measure which lies in the neighborhood of the nominal measure and that maximizes the probability of observing …

Calculating Optimistic Likelihoods Using (Geodesically) Convex Optimization

A fundamental problem arising in many areas of machine learning is the evaluation of the likelihood of a given observation under different nominal distributions. Frequently, these nominal distributions are themselves estimated from data, which makes them susceptible to estimation errors. We thus propose to replace each nominal distribution with an ambiguity set containing all distributions …