Sample Average Approximation and Model Predictive Control for Multistage Stochastic Optimization

Sample average approximation-based stochastic dynamic programming and model predictive control are two different methods of approaching multistage stochastic optimization. Model predictive control, despite a lack of theoretical backing, is often used instead of stochastic dynamic programming out of computational necessity. For settings where the stage reward is a convex function of the random terms, the stage dynamics …
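For context, sample average approximation replaces the expectation in each stage with an empirical average over drawn scenarios. A minimal single-stage illustration (notation ours, not the paper's):

\[
\min_{x \in X} \; \mathbb{E}\big[F(x,\xi)\big] \;\;\approx\;\; \min_{x \in X} \; \frac{1}{N}\sum_{i=1}^{N} F(x,\xi_i),
\]

where \(\xi_1,\dots,\xi_N\) are i.i.d. samples of the random term \(\xi\); the multistage version applies this approximation recursively to the cost-to-go at each stage.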

A Unified Approach for Maximizing Continuous $\gamma$-weakly DR-submodular Functions

This paper presents a unified approach for maximizing continuous \(\gamma\)-weakly DR-submodular functions that encompasses a range of settings and oracle access types. Our approach includes a Frank-Wolfe type offline algorithm for both monotone and non-monotone functions, with different restrictions on the convex feasible region. We consider settings where the oracle provides access to either …
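As a rough illustration of the Frank-Wolfe template referred to above, here is a sketch of the classic continuous-greedy variant for monotone objectives over a down-closed region (not the paper's algorithm; the toy objective and feasible set are our stand-ins):

```python
import numpy as np

def frank_wolfe_max(grad, lmo, d, T=100):
    """Continuous-greedy Frank-Wolfe ascent for a monotone DR-submodular f.

    grad -- callable returning the gradient of f at x
    lmo  -- linear maximization oracle: v = argmax_{v in K} <v, g>
    d    -- dimension; the iterate starts at 0, assumed to lie in K
    """
    x = np.zeros(d)
    for _ in range(T):
        v = lmo(grad(x))  # best feasible direction for the linearized objective
        x += v / T        # step 1/T; x stays in K when K is down-closed convex
    return x

# Toy instance: f(x) = <a, log(1 + x)> is monotone DR-submodular on [0, 1]^d.
a = np.array([3.0, 1.0, 2.0])
grad = lambda x: a / (1.0 + x)

def lmo(g):
    # LMO for K = {0 <= x <= 1, sum(x) <= 2}: take the two largest entries of g.
    v = np.zeros_like(g)
    v[np.argsort(g)[-2:]] = 1.0
    return v

print(frank_wolfe_max(grad, lmo, d=3))
```

For monotone \(\gamma\)-weakly DR-submodular objectives, approximation guarantees of the form \(1-e^{-\gamma}\) are typical for this template.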

Managing Distributional Ambiguity in Stochastic Optimization through a Statistical Upper Bound Framework

Stochastic optimization is often hampered by distributional ambiguity, where critical probability distributions are poorly characterized or unknown. Addressing this challenge, we introduce a new framework that targets the minimization of a statistical upper bound for the expected value of uncertain objectives, facilitating more statistically robust decision-making. Central to our approach is the Average Percentile Upper …
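One way to read "minimizing a statistical upper bound" (our schematic paraphrase, not the paper's definition): construct from data a quantity \(U_\alpha(x)\) that bounds the unknown expected cost with prescribed confidence, and minimize it,

\[
\min_{x\in X} \; U_\alpha(x), \qquad \text{with} \qquad \Pr\!\Big(\mathbb{E}\big[f(x,\xi)\big] \le U_\alpha(x)\Big) \ge 1-\alpha .
\]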

Uncertainty Quantification for Multiobjective Stochastic Convex Quadratic Programs

A multiobjective stochastic convex quadratic program (MOSCQP) is a multiobjective optimization problem with convex quadratic objectives that are observed with stochastic error. MOSCQP is a useful problem formulation arising, for example, in model calibration and nonlinear system identification when a single regression model combines data from multiple distinct sources, resulting in a multiobjective least squares …
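A schematic of the kind of multiobjective least-squares formulation described, in our notation, assuming \(m\) data sources with design matrices \(A_k\) and responses \(b_k\):

\[
\min_{x \in \mathbb{R}^n} \;\Big( \|A_1 x - b_1\|_2^2,\; \dots,\; \|A_m x - b_m\|_2^2 \Big),
\]

where each objective is a convex quadratic in \(x\) and is evaluated with stochastic error.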

Problem-Parameter-Free Decentralized Nonconvex Stochastic Optimization

Existing decentralized algorithms usually require knowledge of problem parameters for updating local iterates. For example, hyperparameters such as the learning rate typically require knowledge of the Lipschitz constant of the global gradient or topological information about the communication network, neither of which is usually accessible in practice. In this paper, we propose D-NASA, the first algorithm …

Variance Reduction and Low Sample Complexity in Stochastic Optimization via Proximal Point Method

This paper proposes a stochastic proximal point method to solve a stochastic convex composite optimization problem. High-probability results in stochastic optimization typically hinge on restrictive assumptions on the stochastic gradient noise, for example, sub-Gaussian distributions. Assuming only weak conditions such as bounded variance of the stochastic gradient, this paper establishes a low sample complexity …
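For orientation, a stochastic proximal point step solves a regularized subproblem on the sampled loss instead of taking a gradient step, i.e. \(x_{k+1} = \arg\min_x f(x;\xi_k) + \tfrac{1}{2\eta}\|x-x_k\|^2\). A minimal sketch for least-squares losses, where this subproblem has a closed form (setup and names are ours, not the paper's):

```python
import numpy as np

def stochastic_prox_point(A, b, eta=1.0, epochs=5, seed=0):
    """Stochastic proximal point for least squares: min_x (1/2n) * ||A x - b||^2.

    Each step solves argmin_x 0.5*(a_i @ x - b_i)**2 + ||x - x_k||^2 / (2*eta),
    which has a closed form for a single linear measurement (a_i, b_i).
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            a, bi = A[i], b[i]
            r = (a @ x - bi) / (1.0 + eta * (a @ a))  # residual at the prox solution
            x -= eta * r * a                          # exact proximal step
    return x

# Tiny usage example on synthetic data.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 5))
x_true = rng.normal(size=5)
b = A @ x_true + 0.01 * rng.normal(size=200)
print(np.linalg.norm(stochastic_prox_point(A, b) - x_true))
```

One appeal of the proximal step over a plain SGD step is stability: the closed-form update remains well behaved even for large \(\eta\).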

Extending the Reach of First-Order Algorithms for Nonconvex Min-Max Problems with Cohypomonotonicity

We focus on constrained, \(L\)-smooth, nonconvex-nonconcave min-max problems either satisfying \(\rho\)-cohypomonotonicity or admitting a solution to the \(\rho\)-weakly Minty Variational Inequality (MVI), where larger values of the parameter \(\rho>0\) correspond to a greater degree of nonconvexity. These problem classes include examples in two-player reinforcement learning, interaction-dominant min-max problems, and certain synthetic test problems …
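For reference, one common way these conditions are written for the operator \(F\) associated with the min-max problem (our paraphrase; exact constant conventions vary across papers):

\[
\rho\text{-cohypomonotonicity:}\quad \langle F(u)-F(v),\, u-v\rangle \;\ge\; -\rho\,\|F(u)-F(v)\|^2 \quad \text{for all } u, v,
\]
\[
\rho\text{-weak MVI:}\quad \exists\, u^\star \text{ such that } \langle F(u),\, u-u^\star\rangle \;\ge\; -\rho\,\|F(u)\|^2 \quad \text{for all } u,
\]

with \(\rho=0\) recovering monotonicity and the standard MVI, respectively.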

Distributionally Fair Stochastic Optimization using Wasserstein Distance

A traditional stochastic program under a finite population typically seeks to optimize efficiency by maximizing the expected profits or minimizing the expected costs, subject to a set of constraints. However, implementing such optimization-based decisions can have varying impacts on individuals, and when assessed using the individuals’ utility functions, these impacts may differ substantially across demographic …

On Tractability, Complexity, and Mixed-Integer Convex Programming Representability of Distributionally Favorable Optimization

Distributionally Favorable Optimization (DFO) is an important framework for decision-making under uncertainty, with applications across fields such as reinforcement learning, online learning, robust statistics, chance-constrained programming, and two-stage stochastic optimization without relatively complete recourse. In contrast to the traditional Distributionally Robust Optimization (DRO) paradigm, DFO presents a unique challenge: the application of the inner infimum …
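A schematic of the contrast, in our notation: where DRO hedges against the least favorable distribution in an ambiguity set \(\mathcal{P}\), DFO optimizes against the most favorable one, which is what places an inner infimum rather than a supremum next to the outer minimization:

\[
\text{DRO:}\;\; \min_{x\in X}\, \sup_{\mathbb{P}\in\mathcal{P}}\, \mathbb{E}_{\mathbb{P}}\big[f(x,\xi)\big], \qquad\quad
\text{DFO:}\;\; \min_{x\in X}\, \inf_{\mathbb{P}\in\mathcal{P}}\, \mathbb{E}_{\mathbb{P}}\big[f(x,\xi)\big].
\]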

Policy with guaranteed risk-adjusted performance for multistage stochastic linear problems

Risk-averse multistage problems are gaining interest in various fields of application. Under convexity assumptions, these problems can be solved with trajectory-following dynamic programming algorithms such as Stochastic Dual Dynamic Programming (SDDP), which yields a deterministic lower bound, and dual SDDP, which yields deterministic upper bounds. In this paper, we leverage …
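As background on where these bounds come from (standard SDDP machinery, summarized in our notation): SDDP builds outer cutting-plane models \(\underline{V}_t\) of the convex cost-to-go functions \(V_t\),

\[
\underline{V}_t(x) \;=\; \max_{k}\,\big\{\alpha_k + \langle \beta_k, x\rangle\big\} \;\le\; V_t(x),
\]

so solving the first-stage problem with \(\underline{V}_2\) in place of \(V_2\) gives a deterministic lower bound; dual SDDP runs an analogous scheme on the dual problem, producing deterministic upper bounds.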