A Python package for multi-stage stochastic programming

This paper presents a Python package for solving multi-stage stochastic linear programs (MSLP) and multi-stage stochastic integer programs (MSIP). Algorithms based on an extensive formulation and on the Stochastic Dual Dynamic (Integer) Programming (SDDP/SDDiP) method are implemented. The package offers a user-friendly syntax and a number of features that are not available in competing software packages. …
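The extensive formulation mentioned above can be illustrated on a tiny instance. The following is a minimal sketch, not the package's own API (which the abstract does not detail): the extensive form of a two-scenario newsvendor LP, written out scenario by scenario and solved with `scipy.optimize.linprog`; all numbers are illustrative.

```python
# Extensive form of a two-stage stochastic LP (hypothetical toy instance):
# order x at unit cost 1, then in each scenario s sell y_s <= min(x, d_s)
# at unit price 3.  Second-stage variables are replicated per scenario.
from scipy.optimize import linprog

demands = [1.0, 3.0]          # scenario demands d_s
probs = [0.5, 0.5]            # scenario probabilities p_s
price, cost = 3.0, 1.0

# Variables: [x, y_1, y_2]; minimize cost*x - sum_s p_s * price * y_s
c = [cost] + [-p * price for p in probs]
# Constraints: y_s <= x and y_s <= d_s for each scenario
A_ub = [[-1, 1, 0], [-1, 0, 1], [0, 1, 0], [0, 0, 1]]
b_ub = [0, 0, demands[0], demands[1]]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
# Optimal: order up to the high-demand scenario, since the expected
# marginal revenue 0.5*3 = 1.5 exceeds the unit cost 1.
```

The extensive form grows linearly in the number of scenarios, which is exactly why decomposition methods such as SDDP/SDDiP are needed for multi-stage trees.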

Stochastic Lipschitz Dynamic Programming

We propose a new algorithm for solving multistage stochastic mixed-integer linear programming (MILP) problems with complete continuous recourse. In a manner similar to cutting-plane methods, we construct nonlinear Lipschitz cuts to build lower approximations of the non-convex cost-to-go functions. An example of such a class of cuts is those derived using …
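The idea behind a Lipschitz cut can be sketched concretely. Assuming a known Lipschitz constant L (the paper's cut family and constants are more refined than this), each evaluated pair (x_k, Q(x_k)) induces the valid underestimator Q(x_k) - L|x - x_k|, and the pointwise maximum of such cones lower-approximates even a non-convex cost-to-go function; the function Q and trial points below are hypothetical.

```python
# Sketch: Lipschitz cones as valid cuts for a non-convex 1-Lipschitz
# function Q.  Each sample anchors a cone; the max over cones is a
# lower approximation that is tight at the sampled points.
import numpy as np

L = 1.0
Q = lambda x: np.minimum(np.abs(x - 1.0), np.abs(x + 1.0))  # non-convex, 1-Lipschitz

centers = np.array([-1.5, 0.0, 1.5])   # hypothetical trial points
values = Q(centers)

def lower_approx(x):
    # pointwise maximum of the Lipschitz cones anchored at the samples
    return np.max(values[None, :] - L * np.abs(x[:, None] - centers[None, :]), axis=1)

grid = np.linspace(-2.0, 2.0, 401)
assert np.all(lower_approx(grid) <= Q(grid) + 1e-12)   # valid lower bound
assert np.allclose(lower_approx(centers), values)      # tight at samples
```

Unlike affine cuts, these cones never overestimate a non-convex function, which is what makes them usable without convexity of the cost-to-go.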

A Framework for Solving Chance-Constrained Linear Matrix Inequality Programs

We propose a novel partial sample average approximation (PSAA) framework to solve the two main types of chance-constrained linear matrix inequality (CCLMI) problems: CCLMI with random technology matrix, and CCLMI with random right-hand side. We propose a series of computationally tractable PSAA-based approximations for CCLMI problems, analyze their properties, and derive sufficient conditions ensuring convexity. …

Scalable Preconditioning of Block-Structured Linear Algebra Systems using ADMM

We study the solution of block-structured linear algebra systems arising in optimization by using iterative solution techniques. These systems are the core computational bottleneck of many problems of interest such as parameter estimation, optimal control, network optimization, and stochastic programming. Our approach uses a Krylov solver (GMRES) that is preconditioned with an alternating method of …
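The overall pattern — GMRES preconditioned by an operator that exploits block structure — can be sketched with SciPy. Here a simple block-diagonal solve stands in for the paper's ADMM-based preconditioner, and the matrices are synthetic; only the solver/preconditioner wiring is the point.

```python
# Sketch: preconditioned GMRES on a synthetic 2-block system with weak
# coupling.  The preconditioner solves with the block-diagonal part only,
# applied through a scipy LinearOperator.
import numpy as np
from scipy.linalg import block_diag
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(0)
m = 20
B1 = np.eye(m) + 0.1 * rng.standard_normal((m, m))
B2 = np.eye(m) + 0.1 * rng.standard_normal((m, m))
A = block_diag(B1, B2)
A[:m, m:] = 0.01 * rng.standard_normal((m, m))   # weak off-block coupling
b = rng.standard_normal(2 * m)

B1_inv, B2_inv = np.linalg.inv(B1), np.linalg.inv(B2)

def apply_M(r):
    # approximate solve with the block-diagonal part (the preconditioner)
    return np.concatenate([B1_inv @ r[:m], B2_inv @ r[m:]])

M = LinearOperator((2 * m, 2 * m), matvec=apply_M)
x, info = gmres(A, b, M=M)
```

Because the preconditioned operator is close to the identity when the coupling is weak, GMRES converges in very few iterations; an ADMM-based preconditioner plays an analogous role for the coupled block systems in the paper.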

General risk measures for robust machine learning

A wide array of machine learning problems are formulated as the minimization of the expectation of a convex loss function over some parameter space. Since the probability distribution of the data of interest is usually unknown, it is often estimated from training sets, which may lead to poor out-of-sample performance. In this work, we …

ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization

We propose a new stochastic first-order algorithmic framework for solving stochastic composite nonconvex optimization problems that covers both the finite-sum and expectation settings. Our algorithms rely on the SARAH estimator introduced in (Nguyen et al., 2017a) and consist of two steps: a proximal-gradient step and an averaging step, which makes them different from existing nonconvex proximal-type algorithms. …
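The two-step structure named above (proximal-gradient step, then averaging, on top of the SARAH recursive estimator) can be sketched as follows. This is a toy convex l1-regularized least-squares instance with illustrative step sizes, not the paper's nonconvex setting or tuning.

```python
# Sketch of a ProxSARAH-style loop: SARAH recursive gradient estimator,
# proximal-gradient step, then an averaging step.  Problem and step
# sizes (eta, gamma) are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, d, lam = 50, 5, 0.1
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d)

grad_i = lambda x, i: A[i] * (A[i] @ x - b[i])             # single-sample gradient
prox_l1 = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
obj = lambda x: 0.5 / n * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

eta, gamma = 0.05, 0.9
x = np.zeros(d)
for epoch in range(30):
    v = A.T @ (A @ x - b) / n                   # full gradient at the snapshot
    for _ in range(n):
        x_hat = prox_l1(x - eta * v, eta * lam)     # proximal-gradient step
        x_next = (1 - gamma) * x + gamma * x_hat    # averaging step
        i = rng.integers(n)
        v = grad_i(x_next, i) - grad_i(x, i) + v    # SARAH recursion
        x = x_next
```

The averaging step is what distinguishes this template from a plain proximal SGD/SVRG loop: the iterate moves only a gamma-fraction of the way toward the proximal point.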

Distributionally robust optimization with multiple time scales: valuation of a thermal power plant

The valuation of a real option is preferably done with the inclusion of uncertainties in the model, since the value depends on future costs and revenues, which are not perfectly known today. The usual value of the option is defined as the maximal expected (discounted) profit one may achieve under optimal management of the operation. …

Normal Approximation for Stochastic Gradient Descent via Non-Asymptotic Rates of Martingale CLT

We provide non-asymptotic convergence rates of the Polyak-Ruppert averaged stochastic gradient descent (SGD) to a normal random vector for a class of twice-differentiable test functions. A crucial intermediate step is proving a non-asymptotic martingale central limit theorem (CLT), i.e., establishing the rates of convergence of a multivariate martingale difference sequence to a normal random vector, …
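For readers unfamiliar with the object of study, Polyak-Ruppert averaging itself is easy to sketch; this illustrates the averaged iterate whose normal approximation the paper quantifies, not the martingale CLT argument. The problem, step-size schedule, and tolerance below are illustrative.

```python
# Sketch: SGD for the mean of N(mu, 1) with Robbins-Monro steps, plus
# Polyak-Ruppert averaging.  The averaged iterate theta_bar concentrates
# around theta* = mu at the 1/sqrt(T) scale.
import numpy as np

rng = np.random.default_rng(2)
mu, T = 2.0, 20000
theta, running_sum = 0.0, 0.0
for t in range(1, T + 1):
    z = mu + rng.standard_normal()                 # noisy sample
    theta -= (1.0 / t ** 0.6) * (theta - z)        # stochastic gradient step
    running_sum += theta
theta_bar = running_sum / T                        # Polyak-Ruppert average
```

Rescaled as sqrt(T) * (theta_bar - mu), this average is asymptotically normal; the paper's contribution is making that convergence quantitative.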

Logarithmic-Barrier Decomposition Interior-Point Methods for Stochastic Linear Optimization in a Hilbert Space

Several logarithmic-barrier interior-point methods are now available for solving two-stage stochastic optimization problems with recourse in the finite-dimensional setting. However, despite the genuine need for studying such methods in general spaces, there are no infinite-dimensional analogs of these methods. Inspired by this evident gap in the literature, in this paper, we propose logarithmic-barrier decomposition-based interior-point …

Identifying Effective Scenarios for Sample Average Approximation

We introduce a method to improve the tractability of the well-known Sample Average Approximation (SAA) without compromising important theoretical properties, such as convergence in probability and consistency under independent and identically distributed (iid) sampling. We consider each scenario as a polyhedron in the joint space of first-stage and second-stage decision variables. According to John’s …
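As context, plain SAA — the baseline whose tractability the abstract aims to improve — replaces the expectation with an average over N iid scenarios and solves the resulting deterministic problem. A hedged sketch on a newsvendor instance (all data hypothetical), where the SAA minimizer is known to be a sample quantile, so the result can be checked:

```python
# Sketch of plain SAA: average the cost over N iid demand scenarios and
# minimize the sample-average objective (here by grid search).  For a
# newsvendor with holding cost h and shortage cost p, the SAA optimum
# is the p/(p+h) sample quantile of demand.
import numpy as np

rng = np.random.default_rng(3)
h, p, N = 1.0, 3.0, 2000
demand = rng.exponential(scale=10.0, size=N)      # iid scenario sample

saa_cost = lambda x: np.mean(h * np.maximum(x - demand, 0.0)
                             + p * np.maximum(demand - x, 0.0))
grid = np.linspace(0.0, 60.0, 2001)
x_saa = grid[np.argmin([saa_cost(x) for x in grid])]

x_quantile = np.quantile(demand, p / (p + h))     # known SAA minimizer
```

Every scenario contributes a term to the sample-average objective; identifying which scenarios are "effective" lets one drop the rest, which is the direction the abstract pursues.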