Confidence Regions in Wasserstein Distributionally Robust Estimation

Wasserstein distributionally robust optimization (DRO) estimators are obtained as solutions of min-max problems in which the statistician selects a parameter minimizing the worst-case loss among all probability models within a certain distance (in a Wasserstein sense) from the underlying empirical measure. While motivated by the need to identify model parameters or decision choices that are … Read more
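As a hedged sketch of the min-max structure described above, the estimator can be written as follows; the notation here (empirical measure $\widehat{P}_n$, Wasserstein distance $W$, ambiguity radius $\delta$, loss $\ell$) is assumed for illustration and not taken from the paper.

```latex
% Minimal sketch of a Wasserstein DRO estimator: the inner supremum ranges
% over the Wasserstein ball of radius \delta around the empirical measure.
% The order of the distance and the loss \ell are assumptions here.
\hat{\theta}_n \in \operatorname*{arg\,min}_{\theta \in \Theta} \;
  \sup_{P :\, W(P, \widehat{P}_n) \le \delta} \;
  \mathbb{E}_{P}\!\bigl[\ell(\theta; X)\bigr]
```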

Sharing the Value-at-Risk under Distributional Ambiguity

This paper considers the problem of risk sharing, in which a coalition of homogeneous agents, each bearing a random cost, aggregate their costs and share the value-at-risk of the resulting risky position. Because distributional information is limited in practice, the joint distribution of the agents' random costs is difficult to acquire. The coalition, being aware of the … Read more
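For concreteness, the quantity being shared is the value-at-risk of the aggregate cost, which can be recalled via its standard definition; the level $\alpha$ and the aggregate $S$ below are notation assumed here for illustration.

```latex
% Standard value-at-risk of the aggregate cost S = X_1 + ... + X_n at level
% \alpha \in (0,1); conventions for the level vary across the literature.
\mathrm{VaR}_{\alpha}(S) \;=\; \inf\bigl\{ t \in \mathbb{R} : \mathbb{P}(S \le t) \ge \alpha \bigr\},
\qquad S = \sum_{i=1}^{n} X_i
```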

Risk Guarantees for End-to-End Prediction and Optimization Processes

Prediction models are often employed to estimate parameters of optimization models. Although in an end-to-end view the real goal is to achieve good optimization performance, prediction performance is usually measured on its own. While it is usually believed that good prediction performance in estimating the parameters will result in good subsequent optimization … Read more
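To make the gap between the two performance measures concrete, here is a minimal toy sketch (the top-k selection problem and all numbers are invented for illustration and are not from the paper): two predictors with comparable mean-squared error can induce very different downstream decision regret.

```python
# Illustrative sketch (not from the paper): two predictors with comparable
# prediction error can induce very different downstream decision regret.
import numpy as np

c = np.array([5.0, 4.0, 3.0, 1.0])       # true item values
k = 2                                     # the decision: pick the k best items

def decision(values, k):
    """Optimization step: indices of the k largest (predicted) values."""
    return np.argsort(values)[-k:]

def regret(pred, c, k):
    """Value lost by optimizing against the prediction instead of the truth."""
    return c[decision(c, k)].sum() - c[decision(pred, k)].sum()

pred_a = np.array([5.5, 4.5, 2.5, 0.5])   # errors preserve the ranking
pred_b = np.array([4.5, 3.4, 3.6, 1.5])   # similar-size errors flip the ranking

for name, pred in [("A", pred_a), ("B", pred_b)]:
    mse = np.mean((pred - c) ** 2)
    print(f"predictor {name}: MSE = {mse:.3f}, regret = {regret(pred, c, k):.2f}")
```

Predictor A has the smaller-looking errors only in the decision-relevant sense: its MSE is similar to B's, yet it preserves the ranking of the items and incurs zero regret, while B's errors flip the ranking and cost real objective value.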

Risk-Averse Bi-Level Stochastic Network Interdiction Model for Cyber-Security Risk Management

Security of cyber networks is crucial; recent severe cyber-attacks have had a devastating effect on many large organizations. The attack graph, which maps the potential attack paths of a cyber network, is a popular tool for analyzing cyber system vulnerability. In this study, we propose a bi-level stochastic network interdiction model on an attack graph … Read more

Hybrid Stochastic Gradient Descent Algorithms for Stochastic Nonconvex Optimization

We introduce a hybrid stochastic estimator to design stochastic gradient algorithms for solving stochastic optimization problems. The hybrid estimator is a convex combination of an existing biased estimator and an unbiased estimator, which yields a useful property on its variance. We limit our consideration to a hybrid SARAH-SGD for nonconvex expectation problems. However, our idea … Read more
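A minimal sketch of such a hybrid SARAH-SGD estimator on a toy least-squares problem is given below. The abstract only states that the estimator is a convex combination of a biased (SARAH-style) and an unbiased (SGD) term; the weight `beta`, the step size, and the sampling scheme here are assumptions, not the paper's actual schedules.

```python
# Hedged sketch of a hybrid SARAH-SGD gradient estimator on toy least squares.
# beta, the step size, and the sampling are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 5))
b = A @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

def stoch_grad(w, idx):
    """Stochastic gradient of 0.5*(a_i^T w - b_i)^2 on a sampled row."""
    return A[idx] * (A[idx] @ w - b[idx])

w_prev = np.zeros(5)
v = stoch_grad(w_prev, rng.integers(len(b)))   # initialize the estimator
w = w_prev - 0.05 * v
beta, step = 0.7, 0.05

for _ in range(2000):
    i, j = rng.integers(len(b), size=2)
    # Biased SARAH-style recursion on sample i, unbiased SGD term on sample j:
    sarah = stoch_grad(w, i) - stoch_grad(w_prev, i) + v
    sgd = stoch_grad(w, j)
    v = beta * sarah + (1.0 - beta) * sgd      # the convex combination
    w_prev, w = w, w - step * v

print("final objective:", 0.5 * np.mean((A @ w - b) ** 2))
```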

Acceleration of SVRG and Katyusha X by Inexact Preconditioning

Empirical risk minimization is an important class of optimization problems with many popular machine learning applications, and stochastic variance reduction methods are popular choices for solving them. Among these methods, SVRG and Katyusha X (a Nesterov-accelerated SVRG) achieve fast convergence without substantial memory requirements. In this paper, we propose to accelerate these two algorithms … Read more
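One way to read "inexact preconditioning" is sketched below on ridge regression: an SVRG step is rescaled by a preconditioner that is applied only approximately, here via a few conjugate-gradient iterations. The choice of preconditioner `M` and the few-step CG solve are assumptions made for illustration; the paper's construction may differ.

```python
# Hedged sketch: SVRG with an inexactly applied preconditioner on ridge
# regression. M and the 3-step CG solve are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, d, lam = 300, 10, 0.1
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
M = A.T @ A / n + lam * np.eye(d)          # SPD preconditioner (assumed choice)

def grad(w, rows):
    r = A[rows] @ w - b[rows]
    return A[rows].T @ r / len(rows) + lam * w

def cg(M, g, iters=3):
    """A few conjugate-gradient steps: the 'inexact' solve of M d = g."""
    x = np.zeros_like(g); r = g.copy(); p = r.copy()
    for _ in range(iters):
        Mp = M @ p
        a = (r @ r) / (p @ Mp)
        x += a * p
        r_new = r - a * Mp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

w = np.zeros(d)
all_rows = np.arange(n)
for epoch in range(20):
    w_snap, g_full = w.copy(), grad(w, all_rows)   # SVRG snapshot
    for _ in range(n):
        i = rng.integers(n, size=1)
        v = grad(w, i) - grad(w_snap, i) + g_full  # variance-reduced gradient
        w -= 0.5 * cg(M, v)                        # inexactly preconditioned step
print("objective:", 0.5 * np.mean((A @ w - b) ** 2) + 0.5 * lam * w @ w)
```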

Lagrangian relaxation based heuristics for a chance-constrained optimization model of a hybrid solar-battery storage system

Published in the Journal of Global Optimization: https://doi.org/10.1007/s10898-021-01041-y

We develop a stochastic optimization model for scheduling a hybrid solar-battery storage system. Solar power in excess of the promised delivery can be used to charge the battery, while shortfalls relative to the promise are met by discharging the battery. We ensure reliable operations by using a joint chance … Read more
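Schematically, a joint chance constraint of the kind mentioned above has the following form; the symbols here ($x$ the schedule, $\xi$ the random solar output, $g_t$ the period-$t$ operational constraints, $\epsilon$ a small risk tolerance) are invented for illustration.

```latex
% Schematic joint chance constraint: all operational constraints must hold
% simultaneously, across all time periods, with high probability.
\mathbb{P}\bigl( g_t(x, \xi) \le 0 \;\; \text{for all } t = 1, \dots, T \bigr) \;\ge\; 1 - \epsilon
```

Unlike individual chance constraints imposed per period, the joint version guarantees that the entire schedule is reliable with probability at least $1-\epsilon$, which is what makes such models both more faithful and harder to solve.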

On the Stochastic Vehicle Routing Problem with time windows, correlated travel times, and time dependency

Most state-of-the-art algorithms for the Vehicle Routing Problem, such as Branch-and-Price algorithms or metaheuristics, rely on a fast feasibility test for a given route. We devise the first approach to approximately check feasibility in the Stochastic Vehicle Routing Problem with time windows, where travel times are correlated and depend on the time of … Read more
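The quantity such a test targets can be made concrete with a brute-force Monte Carlo baseline, sketched below: sample correlated travel times, propagate arrival times along a fixed route, and estimate the probability that every time window is met. The paper devises an approximate analytic check rather than sampling; this baseline and all its numbers are only for illustration.

```python
# Hedged sketch: Monte Carlo baseline for the probability that a fixed route
# meets all time windows under correlated travel times. All data invented.
import numpy as np

rng = np.random.default_rng(3)
windows = [(0.0, 5.0), (2.0, 9.0), (6.0, 14.0)]   # (open, close) per stop
mean = np.array([3.0, 3.0, 3.0])                  # mean leg travel times
cov = np.array([[1.0, 0.5, 0.2],                  # correlated legs
                [0.5, 1.0, 0.5],
                [0.2, 0.5, 1.0]])

def route_feasible(travel):
    """Propagate arrival times; wait if early, fail if past the close time."""
    t = 0.0
    for leg, (open_, close) in zip(travel, windows):
        t = max(t + leg, open_)   # waiting until the window opens is allowed
        if t > close:
            return False
    return True

samples = rng.multivariate_normal(mean, cov, size=20_000).clip(min=0.0)
p = np.mean([route_feasible(s) for s in samples])
print(f"estimated feasibility probability: {p:.3f}")
```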

Variable smoothing for convex optimization problems using stochastic gradients

We aim to solve a structured convex optimization problem in which a nonsmooth function is composed with a linear operator. When full splitting schemes are preferred, primal-dual methods are typically employed, as they are effective and well studied. However, under the additional assumption of Lipschitz continuity of the nonsmooth function which is composed with … Read more
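A common device behind variable smoothing schemes, and a plausible reading of the setup here, is the Moreau envelope of the nonsmooth part; the symbols $g$, $A$, and the schedule $\mu_k \downarrow 0$ below are assumed notation, not taken from the paper.

```latex
% Moreau envelope of g with parameter \mu > 0, and the gradient of the
% resulting smooth surrogate of g(Ax); in variable smoothing, \mu_k is
% driven to zero along the iterations (an assumed schedule).
g_{\mu}(y) = \min_{z} \Bigl\{ g(z) + \tfrac{1}{2\mu}\|z - y\|^2 \Bigr\},
\qquad
\nabla \bigl( g_{\mu} \circ A \bigr)(x) = A^{\top} \, \frac{Ax - \operatorname{prox}_{\mu g}(Ax)}{\mu}
```

Replacing $g$ by $g_{\mu_k}$ yields a smooth problem to which (stochastic) gradient steps apply, with $\mu_k$ trading off smoothness (the gradient is $1/\mu_k$-Lipschitz) against approximation accuracy.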

Solving Chance-Constrained Problems via a Smooth Sample-Based Nonlinear Approximation

We introduce a new method for solving nonlinear continuous optimization problems with chance constraints. Our method is based on a reformulation of the probabilistic constraint as a quantile function. The quantile function is approximated via a differentiable sample average approximation. We provide theoretical statistical guarantees of the approximation, and illustrate empirically that the reformulation can … Read more
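The reformulation rests on the standard equivalence that $\mathbb{P}(c(x,\xi) \le 0) \ge 1-\alpha$ holds exactly when the $(1-\alpha)$-quantile of $c(x,\xi)$ is nonpositive. Below is a hedged sketch of a differentiable sample-average surrogate for the probability itself, using a sigmoid in place of the step function; the sigmoid, the temperature `tau`, and the toy constraint are common illustrative choices assumed here, not necessarily the paper's smoothing.

```python
# Hedged sketch: differentiable SAA surrogate for P(c(x, xi) <= 0) >= 1-alpha,
# replacing the indicator with a sigmoid. Smoothing choice is an assumption.
import numpy as np

rng = np.random.default_rng(4)
xi = rng.normal(size=1000)                 # samples of the random parameter
alpha, tau = 0.1, 0.05                     # risk level, smoothing temperature

def c(x, xi):
    """Toy constraint function: feasible when x + xi <= 1."""
    return x + xi - 1.0

def smooth_prob(x):
    """Differentiable SAA of P(c <= 0): sigmoid(-c/tau) -> indicator as tau->0."""
    return np.mean(1.0 / (1.0 + np.exp(c(x, xi) / tau)))

def empirical_prob(x):
    return np.mean(c(x, xi) <= 0.0)

for x in (0.0, -0.3, -0.6):
    print(f"x={x:+.1f}: smooth={smooth_prob(x):.3f}, "
          f"empirical={empirical_prob(x):.3f}, need >= {1 - alpha}")
```

Because the surrogate is smooth in `x`, it can be handed to a standard nonlinear optimizer as an ordinary inequality constraint, which is the practical point of such reformulations.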