Discrete Optimal Transport with Independent Marginals is #P-Hard

We study the computational complexity of the optimal transport problem that evaluates the Wasserstein distance between the distributions of two K-dimensional discrete random vectors. The best known algorithms for this problem run in polynomial time in the maximum of the number of atoms of the two distributions. However, if the components of either random vector …
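The discrete optimal transport problem referenced above can be written as a linear program over the transport polytope: minimize the total transport cost subject to fixed row and column marginals. A minimal sketch (the atom locations, uniform weights, and Euclidean cost are illustrative assumptions, not the paper's setting), using `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Two discrete distributions on atoms in R^2 (illustrative data)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 2))   # atoms of the first distribution
y = rng.normal(size=(5, 2))   # atoms of the second distribution
p = np.full(4, 1 / 4)         # uniform atom weights (an assumption)
q = np.full(5, 1 / 5)

# Cost matrix: Euclidean distance between atoms (gives the 1-Wasserstein distance)
C = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=2)
m, n = C.shape

# Transport polytope: row sums equal p, column sums equal q
A_eq = []
for i in range(m):            # row-sum constraints
    row = np.zeros((m, n)); row[i, :] = 1
    A_eq.append(row.ravel())
for j in range(n):            # column-sum constraints
    col = np.zeros((m, n)); col[:, j] = 1
    A_eq.append(col.ravel())
b_eq = np.concatenate([p, q])

res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
print(res.fun)  # optimal transport cost = Wasserstein distance
```

With K-dimensional random vectors whose components are independent, the number of atoms of the joint distribution grows exponentially in K, which is exactly why polynomial dependence on the atom count does not help, as the abstract indicates.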

Stochastic trust-region and direct-search methods: A weak tail bound condition and reduced sample sizing

Using tail bounds, we introduce a new probabilistic condition for function estimation in stochastic derivative-free optimization which leads to a reduction in the number of samples and eases algorithmic analyses. Moreover, we develop simple stochastic direct-search and trust-region methods for the optimization of a potentially non-smooth function whose values can only be estimated via stochastic … 
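To make the setting concrete, here is a textbook-style stochastic direct search with averaged function estimates and a sufficient-decrease test — a sketch of the general method class, not the paper's algorithm; the objective, noise model, and all parameter values are illustrative assumptions:

```python
import numpy as np

def noisy_f(z, rng):
    """Stochastic oracle: true objective ||z||^2 plus zero-mean noise (assumed model)."""
    return float(np.dot(z, z) + rng.normal(scale=0.1))

def stochastic_direct_search(x0, n_iters=200, alpha=1.0, n_samples=5, seed=0):
    """Poll along +/- coordinate directions using sample-averaged estimates;
    halve the step size on unsuccessful iterations."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    est = lambda z: np.mean([noisy_f(z, rng) for _ in range(n_samples)])
    for _ in range(n_iters):
        fx = est(x)
        improved = False
        for d in np.vstack([np.eye(len(x)), -np.eye(len(x))]):
            trial = x + alpha * d
            # sufficient-decrease test of order alpha^2 guards against noise
            if est(trial) < fx - alpha ** 2:
                x, improved = trial, True
                break
        if not improved:
            alpha *= 0.5  # unsuccessful iteration: shrink the step size
    return x

x_opt = stochastic_direct_search([2.0, -3.0])
```

The sample size `n_samples` is the quantity the paper's weak tail-bound condition aims to reduce; here it is simply fixed.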

Optimization-based Scenario Reduction for Data-Driven Two-stage Stochastic Optimization

We propose a novel, optimization-based method that takes into account the objective and problem structure for reducing the number of scenarios, m, needed for solving two-stage stochastic optimization problems. We develop a corresponding convex optimization-based algorithm, and show that as the number of scenarios increases, the proposed method recovers the SAA solution. We report computational …
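The SAA (sample average approximation) solution that the method recovers can be illustrated on the classical newsvendor problem, where the SAA optimum has a closed form: the critical-ratio sample quantile of demand. This is only the SAA baseline under an assumed normal demand model, not the paper's scenario-reduction algorithm:

```python
import numpy as np
from scipy.stats import norm

# Newsvendor: order q at unit cost c, sell at price p, random demand D.
# The SAA solution over m scenarios is the (p - c)/p sample quantile of demand.
c, p = 1.0, 4.0
critical_ratio = (p - c) / p                     # 0.75

true_q = 100 + 20 * norm.ppf(critical_ratio)     # exact optimum for D ~ N(100, 20)

rng = np.random.default_rng(42)
for m in [10, 100, 10_000]:
    demand = rng.normal(100, 20, size=m)         # m sampled scenarios
    q_saa = np.quantile(demand, critical_ratio)  # SAA solution for this sample
    print(m, round(q_saa, 2))                    # converges toward true_q as m grows
```

Scenario reduction asks for a much smaller weighted scenario set whose induced solution stays close to this SAA solution.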

Bounds for Multistage Mixed-Integer Distributionally Robust Optimization

Multistage mixed-integer distributionally robust optimization (DRO) forms a class of extremely challenging problems, since problem size grows exponentially with the number of stages. One way to model the uncertainty in multistage DRO is by creating sets of conditional distributions (the so-called conditional ambiguity sets) on a finite scenario tree and requiring that such distributions remain …

A barrier Lagrangian dual method for multi-stage stochastic convex semidefinite optimization

In this paper, we present a polynomial-time barrier algorithm for solving multi-stage stochastic convex semidefinite optimization based on the Lagrangian dual method which relaxes the nonanticipativity constraints. We show that the barrier Lagrangian dual functions for our setting form self-concordant families with respect to barrier parameters. We also use the barrier function method to improve …

Distributionally Robust Modeling of Optimal Control

The aim of this paper is to formulate several questions related to distributionally robust Stochastic Optimal Control modeling. As an example, the distributionally robust counterpart of the classical inventory model is discussed in detail. Finite and infinite horizon stationary settings are considered.

Chance constrained nonlinear optimization with skewed distributions and dependent rows

This paper discusses chance constrained optimization problems where the constraints are linear in the random variables but nonlinear in the decision variables. For the individual nonlinear chance constraint, we derive a tractable reformulation under finite Gaussian mixture distributions and design a tight approximation under the generalized hyperbolic distribution. For the joint nonlinear chance constraint, we study several …
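For orientation, the plain Gaussian case — a special case of the Gaussian mixtures the paper treats — admits the standard second-order-cone reformulation of an individual linear chance constraint, which a Monte Carlo check confirms. All numerical data below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

# Individual chance constraint P(xi^T x <= b) >= 1 - eps with xi ~ N(mu, Sigma).
# Deterministic equivalent: mu^T x + z_{1-eps} * sqrt(x^T Sigma x) <= b.
mu = np.array([1.0, 2.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])
x = np.array([0.5, 0.5])
eps = 0.05

# Smallest right-hand side b for which the chance constraint holds at x
b = mu @ x + norm.ppf(1 - eps) * np.sqrt(x @ Sigma @ x)

# Monte Carlo verification of the reformulated probability
rng = np.random.default_rng(1)
xi = rng.multivariate_normal(mu, Sigma, size=200_000)
prob = np.mean(xi @ x <= b)   # should be close to 1 - eps = 0.95
```

For Gaussian mixtures the constraint probability becomes a weighted sum of such normal CDF terms, which is what makes the mixture case tractable yet no longer a single conic constraint.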

Graph topology invariant gradient and sampling complexity for decentralized and stochastic optimization

One fundamental problem in decentralized multi-agent optimization is the trade-off between gradient/sampling complexity and communication complexity. We propose new algorithms whose gradient and sampling complexities are graph topology invariant, while their communication complexities remain optimal. For convex smooth deterministic problems, we propose a primal dual sliding (PDS) algorithm that computes an $\epsilon$-solution with $O((\tilde{L}/\epsilon)^{1/2})$ gradient …

Worst-Case Complexity of an SQP Method for Nonlinear Equality Constrained Stochastic Optimization

A worst-case complexity bound is proved for a sequential quadratic optimization (commonly known as SQP) algorithm that has been designed for solving optimization problems involving a stochastic objective function and deterministic nonlinear equality constraints. Barring additional terms that arise due to the adaptivity of the monotonically nonincreasing merit parameter sequence, the proved complexity bound is …

A Stochastic Bregman Primal-Dual Splitting Algorithm for Composite Optimization

We study a stochastic first-order primal-dual method for solving convex-concave saddle point problems over real reflexive Banach spaces using Bregman divergences and relative smoothness assumptions, in which we allow for stochastic error in the computation of gradient terms within the algorithm. We show ergodic convergence in expectation of the Lagrangian optimality gap with a …