Risk-Sensitive Variational Bayes: Formulations and Bounds

We study data-driven decision-making problems in a parametrized Bayesian framework. We adopt a risk-sensitive approach to modeling the interplay between statistical estimation of parameters and optimization, by computing a risk measure over a loss/disutility function with respect to the posterior distribution over the parameters. While this forms the standard Bayesian decision-theoretic approach, we focus on … Read more
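
One generic way to write such a risk-sensitive objective, in assumed notation (posterior \pi_n over the parameter \theta given the data, decision a, loss/disutility \ell, risk measure \rho), is sketched below; the entropic risk is shown only as one common choice of \rho, not as the paper's specific formulation.

    \[
      a^\star \in \operatorname*{arg\,min}_{a \in \mathcal{A}}
        \rho_{\theta \sim \pi_n}\!\bigl[\ell(\theta, a)\bigr],
      \qquad \text{e.g.}\quad
      \rho[Z] = \tfrac{1}{\gamma}\log \mathbb{E}\bigl[e^{\gamma Z}\bigr]
      \quad (\text{entropic risk}, \ \gamma > 0).
    \]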

A Scenario-Based Approach for the Vehicle Routing Problem with Roaming Delivery Locations under Stochastic Travel Times

We address a stochastic variant of the Vehicle Routing Problem with Roaming Delivery Locations. In this model, direct-to-consumer deliveries can be made in the trunk of the customer’s car, while the vehicle is parked at a location along the customer’s itinerary. The stochasticity arises from uncertainty in travel times, and the problem is formulated … Read more

A Python package for multi-stage stochastic programming

This paper presents a Python package to solve multi-stage stochastic linear programs (MSLP) and multi-stage stochastic integer programs (MSIP). Algorithms based on an extensive formulation and the Stochastic Dual Dynamic (Integer) Programming (SDDP/SDDiP) method are implemented. The package is syntactically friendly and has a number of features that are not available in competing software packages. … Read more
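
As an illustration of the extensive-formulation idea mentioned above (a hand-written sketch, not the package's own interface), the following writes a toy two-stage newsvendor problem as a single deterministic LP over all scenarios; the cost, price, and scenario data are assumptions.

    # Extensive formulation of a two-stage stochastic LP (toy newsvendor):
    # first stage orders x at unit cost c; in scenario s with probability p_s,
    # up to demand d_s can be sold at unit price r.
    import numpy as np
    from scipy.optimize import linprog

    c_order, r_sell = 1.0, 3.0
    demands = np.array([5.0, 10.0, 15.0])      # assumed scenario demands
    probs = np.array([0.3, 0.5, 0.2])          # assumed scenario probabilities
    S = len(demands)

    # Decision vector z = [x, y_1, ..., y_S]; minimize c*x - sum_s p_s * r * y_s.
    obj = np.concatenate(([c_order], -probs * r_sell))

    # Coupling constraints y_s - x <= 0 (cannot sell more than was ordered).
    A_ub = np.hstack([-np.ones((S, 1)), np.eye(S)])
    b_ub = np.zeros(S)

    # Bounds: x >= 0 and 0 <= y_s <= d_s (cannot sell more than the demand).
    bounds = [(0, None)] + [(0, d) for d in demands]

    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print("optimal order quantity:", res.x[0], "expected profit:", -res.fun)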

Stochastic Lipschitz Dynamic Programming

We propose a new algorithm for solving multistage stochastic mixed-integer linear programming (MILP) problems with complete continuous recourse. In a similar way to cutting-plane methods, we construct nonlinear Lipschitz cuts to build lower approximations for the non-convex cost-to-go functions. Examples of such cuts are those derived using … Read more
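
A minimal sketch of the cut shape involved, in assumed notation: each Lipschitz cut lower-bounds a possibly non-convex cost-to-go function Q by Q(x) >= value_k - L*||x - x_k||, and the lower approximation is the pointwise maximum of the cuts collected so far. The toy Q and the constant L below are illustrative choices, not the paper's construction.

    import numpy as np

    def lipschitz_lower_bound(x, cuts, L):
        """Pointwise maximum of the Lipschitz cuts v_k - L*||x - x_k|| at the point x."""
        x = np.asarray(x, dtype=float)
        return max(v_k - L * np.linalg.norm(x - np.asarray(x_k)) for x_k, v_k in cuts)

    # Toy non-convex cost-to-go function; its Lipschitz constant on [-2, 2] is below 3.5.
    Q = lambda z: abs(np.sin(3 * z)) + 0.1 * z ** 2
    cuts = [((z_k,), Q(z_k)) for z_k in np.linspace(-2.0, 2.0, 9)]   # sampled trial points
    for z in (-1.3, 0.0, 0.7):
        print("x =", z, " Q(x) =", Q(z), " lower bound =", lipschitz_lower_bound((z,), cuts, L=3.5))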

A Framework for Solving Chance-Constrained Linear Matrix Inequality Programs

We propose a novel partial sample average approximation (PSAA) framework to solve the two main types of chance-constrained linear matrix inequality (CCLMI) problems: CCLMI with random technology matrix, and CCLMI with random right-hand side. We propose a series of computationally tractable PSAA-based approximations for CCLMI problems, analyze their properties, and derive sufficient conditions ensuring convexity. … Read more
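
For reference, a chance-constrained LMI and its plain sampled counterpart can be written in generic notation (an assumption, not the paper's exact model) as follows; the PSAA variants studied in the paper, as the name suggests, replace only part of the randomness by samples.

    \[
      \min_x \; c^{\top} x
      \quad \text{s.t.} \quad
      \mathbb{P}_{\xi}\!\left[\, A_0(x) + \sum_{j} \xi_j \, A_j(x) \succeq 0 \,\right] \;\ge\; 1 - \epsilon ,
    \]
    \[
      \text{sampled counterpart over scenarios } \xi^1,\dots,\xi^N:\qquad
      \frac{1}{N}\sum_{i=1}^{N} \mathbb{1}\!\left[\, A_0(x) + \sum_{j} \xi^i_j \, A_j(x) \succeq 0 \,\right] \;\ge\; 1 - \epsilon .
    \]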

Scalable Preconditioning of Block-Structured Linear Algebra Systems using ADMM

We study the solution of block-structured linear algebra systems arising in optimization by using iterative solution techniques. These systems are the core computational bottleneck of many problems of interest, such as parameter estimation, optimal control, network optimization, and stochastic programming. Our approach uses a Krylov solver (GMRES) that is preconditioned with an alternating direction method of … Read more
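
A minimal sketch of preconditioned GMRES on a block-structured system is given below. The block-Jacobi preconditioner used here is a simplified stand-in for the paper's ADMM-based scheme and only illustrates how a custom preconditioner is supplied to the Krylov solver; the block sizes and densities are arbitrary assumptions.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import LinearOperator, gmres, splu

    rng = np.random.default_rng(0)
    nb, bs = 4, 50                                       # number of blocks, block size
    blocks = [sp.eye(bs) * 10 + sp.random(bs, bs, density=0.05, random_state=rng)
              for _ in range(nb)]
    coupling = sp.random(nb * bs, nb * bs, density=0.002, random_state=rng)
    A = sp.block_diag(blocks, format="csc") + coupling   # block-diagonal part plus weak coupling
    b = rng.standard_normal(nb * bs)

    # Preconditioner: solve each diagonal block independently (block Jacobi).
    block_lu = [splu(B.tocsc()) for B in blocks]
    def apply_prec(r):
        out = np.empty_like(r)
        for k, lu in enumerate(block_lu):
            out[k * bs:(k + 1) * bs] = lu.solve(r[k * bs:(k + 1) * bs])
        return out

    M = LinearOperator(A.shape, matvec=apply_prec)
    x, info = gmres(A, b, M=M)
    print("converged:", info == 0, " residual:", np.linalg.norm(A @ x - b))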

General risk measures for robust machine learning

A wide array of machine learning problems are formulated as the minimization of the expectation of a convex loss function on some parameter space. Since the probability distribution of the data of interest is usually unknown, it is often estimated from training sets, which may lead to poor out-of-sample performance. In this work, we … Read more
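
As one standard example of replacing the expectation by a risk measure, the conditional value-at-risk of the empirical losses admits the Rockafellar-Uryasev formulation below (notation assumed: losses \ell(w; z_i) on n training points, confidence level \alpha).

    \[
      \min_{w,\; t \in \mathbb{R}} \;\; t + \frac{1}{(1-\alpha)\, n} \sum_{i=1}^{n}
        \bigl( \ell(w; z_i) - t \bigr)_{+}
      \;=\; \min_{w} \; \mathrm{CVaR}_{\alpha}\!\bigl( \ell(w; z_1), \dots, \ell(w; z_n) \bigr).
    \]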

ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization

We propose a new stochastic first-order algorithmic framework to solve stochastic composite nonconvex optimization problems that covers both finite-sum and expectation settings. Our algorithms rely on the SARAH estimator introduced in (Nguyen et al., 2017a) and consist of two steps: a proximal gradient step and an averaging step, making them different from existing nonconvex proximal-type algorithms. … Read more
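
A minimal single-loop sketch of those two steps on a toy l1-regularized least-squares problem is given below; the SARAH recursion shown, the step sizes, and the problem data are assumptions made for illustration rather than the paper's exact parameter choices.

    import numpy as np

    rng = np.random.default_rng(1)
    n, d = 200, 20
    A, b = rng.standard_normal((n, d)), rng.standard_normal(n)
    lam, eta, gamma, epochs = 0.05, 0.01, 0.5, 10        # assumed regularization and step sizes

    grad_i = lambda x, i: A[i] * (A[i] @ x - b[i])                       # gradient of f_i(x) = 0.5*(a_i'x - b_i)^2
    prox_l1 = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)   # soft-thresholding: prox of t*||.||_1

    x_prev = x = np.zeros(d)
    for _ in range(epochs):
        v = A.T @ (A @ x - b) / n                            # full gradient restarts the SARAH recursion
        x_prev = x.copy()
        for i in rng.permutation(n):
            v = v + grad_i(x, i) - grad_i(x_prev, i)         # SARAH variance-reduced gradient estimate
            x_hat = prox_l1(x - eta * v, eta * lam)          # step 1: proximal gradient step
            x_prev, x = x, (1 - gamma) * x + gamma * x_hat   # step 2: averaging step

    print("objective:", 0.5 * np.mean((A @ x - b) ** 2) + lam * np.sum(np.abs(x)))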

Distributionally robust optimization with multiple time scales: valuation of a thermal power plant

The valuation of a real option is preferably carried out with uncertainties included in the model, since the value depends on future costs and revenues, which are not perfectly known today. The usual value of the option is defined as the maximal expected (discounted) profit one may achieve under optimal management of the operation. … Read more
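
In assumed notation (management policy \pi, discount factor \beta, per-period profit r_t, ambiguity set \mathcal{Q} of distributions), the usual expected-value valuation and its distributionally robust counterpart can be contrasted as:

    \[
      V \;=\; \max_{\pi} \; \mathbb{E}_{P}\!\Bigl[ \sum_{t} \beta^{t}\, r_{t}(\pi) \Bigr]
      \qquad \text{vs.} \qquad
      V^{\mathrm{rob}} \;=\; \max_{\pi} \; \min_{Q \in \mathcal{Q}} \; \mathbb{E}_{Q}\!\Bigl[ \sum_{t} \beta^{t}\, r_{t}(\pi) \Bigr].
    \]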

Normal Approximation for Stochastic Gradient Descent via Non-Asymptotic Rates of Martingale CLT

We provide non-asymptotic convergence rates of the Polyak-Ruppert averaged stochastic gradient descent (SGD) to a normal random vector for a class of twice-differentiable test functions. A crucial intermediate step is proving a non-asymptotic martingale central limit theorem (CLT), i.e., establishing the rates of convergence of a multivariate martingale difference sequence to a normal random vector, … Read more
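
A minimal sketch of the Polyak-Ruppert averaging that the result concerns: run SGD with a slowly decaying step size and keep a running average of the iterates, which is the quantity whose normal approximation is studied. The quadratic objective, noise model, and step-size exponent below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    d, n_steps = 5, 20000
    H = np.diag(np.linspace(1.0, 3.0, d))      # Hessian of F(theta) = 0.5 * theta' H theta (minimizer 0)
    theta = rng.standard_normal(d)             # SGD iterate
    theta_bar = np.zeros(d)                    # Polyak-Ruppert average

    for t in range(1, n_steps + 1):
        noisy_grad = H @ theta + rng.standard_normal(d)   # unbiased stochastic gradient
        theta = theta - (0.5 * t ** -0.6) * noisy_grad    # step size ~ t^(-0.6), decaying slower than 1/t
        theta_bar += (theta - theta_bar) / t              # running average of the iterates

    print("last-iterate error:    ", np.linalg.norm(theta))
    print("averaged-iterate error:", np.linalg.norm(theta_bar))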