Applying random coordinate descent in a probability maximization scheme

Computing the gradient of a multivariate distribution function calls for considerable effort. A standard procedure computes it component by component, which makes coordinate descent an attractive choice. This paper deals with constrained convex problems. We apply random coordinate descent within an approximation scheme that, from a dual viewpoint, is an inexact cutting-plane method. We present convergence proofs and a … Read more
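As a point of reference, here is a minimal sketch of generic random coordinate descent on a constrained convex problem, where only one partial derivative is evaluated per iteration. The toy objective, projection, and step size are illustrative assumptions, not the paper's actual probability-maximization scheme.

```python
import numpy as np

def random_coordinate_descent(grad_i, project, x0, step=0.1, iters=1000, seed=0):
    """Generic random coordinate descent: per iteration, evaluate only one
    partial derivative, mirroring settings where each gradient component
    of a multivariate distribution function is expensive on its own."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        i = rng.integers(x.size)        # pick a random coordinate
        x[i] -= step * grad_i(x, i)     # update that coordinate only
        x = project(x)                  # keep the iterate feasible
    return x

# Toy usage: minimize ||x - c||^2 over the box [0, 1]^3.
c = np.array([0.3, 1.5, -0.2])
x_star = random_coordinate_descent(
    grad_i=lambda x, i: 2.0 * (x[i] - c[i]),
    project=lambda x: np.clip(x, 0.0, 1.0),
    x0=np.zeros(3),
)
print(x_star)  # ≈ [0.3, 1.0, 0.0]
```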

Faster Convergence of Stochastic Accelerated Gradient Descent under Interpolation

We prove new convergence rates for a generalized version of stochastic Nesterov acceleration under interpolation conditions. Unlike previous analyses, our approach accelerates any stochastic gradient method that makes sufficient progress in expectation. The proof, which proceeds via the estimating-sequences framework, applies to both convex and strongly convex functions and is easily specialized to accelerated … Read more
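For orientation, the sketch below is a textbook stochastic Nesterov loop on a toy interpolation problem: a consistent least-squares system, where the single-row gradient estimator has zero noise at the solution. The momentum schedule and step size are standard convex-case choices, not taken from the paper.

```python
import numpy as np

def accelerated_sgd(stoch_grad, x0, lr=0.05, iters=2000, seed=0):
    """Textbook stochastic Nesterov loop (convex-case momentum schedule):
    extrapolate, then take a stochastic gradient step at the extrapolated
    point. Under interpolation, gradient noise vanishes at the optimum."""
    rng = np.random.default_rng(seed)
    x, x_prev = x0.copy(), x0.copy()
    for k in range(1, iters + 1):
        beta = (k - 1) / (k + 2)              # standard momentum weights
        y = x + beta * (x - x_prev)           # extrapolation step
        x_prev, x = x, y - lr * stoch_grad(y, rng)
    return x

# Toy interpolation problem: a consistent least-squares system.
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
x_true = np.array([1.0, -0.5])
b = A @ x_true

def stoch_grad(y, rng):
    i = rng.integers(A.shape[0])              # sample one data row
    return 2.0 * (A[i] @ y - b[i]) * A[i]

print(accelerated_sgd(stoch_grad, np.zeros(2)))  # ≈ x_true
```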

Similarity-based Decomposition Algorithm for Two-stage Stochastic Scheduling

This paper presents a novel decomposition method for two-stage stochastic mixed-integer optimization problems. The algorithm builds on the idea of similarity between finite sample sets to measure how similar the first-stage decisions are across the uncertainty realization scenarios. Using this Similarity Index, the non-anticipativity constraints are removed from the problem formulation so that the … Read more
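The abstract does not define the Similarity Index, so the snippet below is purely illustrative: it scores agreement between binary first-stage decision vectors (one per scenario, obtained with non-anticipativity relaxed) as a mean pairwise Jaccard similarity, which is one plausible way to make "similarity between finite sample sets" concrete.

```python
import numpy as np
from itertools import combinations

def similarity_index(first_stage_solutions):
    """Illustrative 'Similarity Index' (an assumption, not the paper's
    definition): mean pairwise Jaccard similarity between binary
    first-stage decision vectors, one per scenario."""
    sols = [np.asarray(s, dtype=bool) for s in first_stage_solutions]
    scores = []
    for a, b in combinations(sols, 2):
        union = np.logical_or(a, b).sum()
        inter = np.logical_and(a, b).sum()
        scores.append(1.0 if union == 0 else inter / union)
    return float(np.mean(scores))

# Three scenarios that agree on most first-stage commitments.
print(similarity_index([[1, 0, 1, 1], [1, 0, 1, 0], [1, 1, 1, 1]]))  # ≈ 0.64
```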

On the Out-of-Sample Performance of Stochastic Dynamic Programming and Model Predictive Control

Sample average approximation-based stochastic dynamic programming (SDP) and model predictive control (MPC) are two different approaches to multistage stochastic optimization. In this paper we investigate the conditions under which SDP may be outperformed by MPC. We show that, depending on the presence of concavity or convexity, MPC can be interpreted as solving a mean-constrained … Read more
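To fix ideas, here is a bare-bones receding-horizon (MPC) loop of the kind being compared: solve a deterministic lookahead problem, implement only the first decision, observe the realized state, and repeat. The toy inventory policy and dynamics are illustrative assumptions only.

```python
def mpc_rollout(solve_horizon, step_dynamics, x0, T, H):
    """Bare-bones receding-horizon loop: solve an H-step deterministic
    lookahead from the current state, implement only the first decision,
    observe the realized transition, and roll the horizon forward."""
    x, decisions = x0, []
    for _ in range(T):
        plan = solve_horizon(x, H)      # deterministic lookahead plan
        u = plan[0]                     # implement the first action only
        decisions.append(u)
        x = step_dynamics(x, u)         # realized next state
    return decisions

# Toy inventory example: order up to level 2 against unit mean demand.
solve = lambda x, H: [max(0.0, 2.0 - x)] * H   # naive order-up-to plan
step = lambda x, u: max(0.0, x + u - 1.0)      # realized demand is 1.0
print(mpc_rollout(solve, step, x0=0.0, T=3, H=5))  # [2.0, 1.0, 1.0]
```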

A Unified Approach for Maximizing Continuous \(\gamma\)-weakly DR-submodular Functions

This paper presents a unified approach for maximizing continuous \(\gamma\)-weakly DR-submodular functions that encompasses a range of settings and oracle access types. Our approach includes a Frank-Wolfe-type offline algorithm for both monotone and non-monotone functions, with different restrictions on the convex feasible region. We consider settings where the oracle provides access to either the … Read more
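A minimal sketch of the continuous-greedy / Frank-Wolfe pattern for the monotone case follows, assuming exact gradient access and a linear maximization oracle (LMO). The concave toy objective and budget constraint are stand-ins, not the paper's setting.

```python
import numpy as np

def frank_wolfe_submodular(grad, lmo, n, steps=100):
    """Continuous-greedy / Frank-Wolfe pattern for monotone DR-submodular
    maximization: start at 0 and repeatedly move a 1/steps fraction toward
    the best feasible ascent direction returned by the LMO."""
    x = np.zeros(n)
    for _ in range(steps):
        v = lmo(grad(x))    # argmax over the feasible region of <grad, v>
        x += v / steps
    return x

def lmo_budget(g, budget=1.0):
    """LMO for {v in [0,1]^n : sum(v) <= budget}: fill the largest
    positive-gradient coordinates first."""
    v = np.zeros_like(g)
    for i in np.argsort(-g):
        if budget <= 0 or g[i] <= 0:
            break
        v[i] = min(1.0, budget)
        budget -= v[i]
    return v

# Toy concave (hence DR-submodular) objective F(x) = sum_i log(1 + w_i x_i).
w = np.array([3.0, 1.0, 0.5])
grad = lambda x: w / (1.0 + w * x)
print(frank_wolfe_submodular(grad, lmo_budget, n=3))
```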

Managing Distributional Ambiguity in Stochastic Optimization through a Statistical Upper Bound Framework

Stochastic optimization is often hampered by distributional ambiguity, where critical probability distributions are poorly characterized or unknown. Addressing this challenge, we introduce a new framework that minimizes a statistical upper bound on the expected value of uncertain objectives, enabling more statistically robust decision-making. Central to our approach is the Average Percentile Upper … Read more
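The abstract truncates before defining the Average Percentile Upper bound, so the function below is only a generic percentile-averaging surrogate in the same spirit: average the empirical q-quantile of sampled costs over a band of upper percentiles. It should not be read as the paper's exact construction.

```python
import numpy as np

def average_percentile_bound(samples, lo=0.9, hi=1.0, grid=20):
    """Illustrative percentile-averaging surrogate (an assumption, not the
    paper's definition): average the empirical q-quantile of the sampled
    objective over q in [lo, hi], a conservative stand-in for the mean."""
    qs = np.linspace(lo, hi, grid)
    return float(np.mean(np.quantile(samples, qs)))

rng = np.random.default_rng(0)
costs = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)
print(np.mean(costs), average_percentile_bound(costs))  # bound > mean
```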

Uncertainty Quantification for Multiobjective Stochastic Convex Quadratic Programs

A multiobjective stochastic convex quadratic program (MOSCQP) is a multiobjective optimization problem with convex quadratic objectives that are observed with stochastic error. MOSCQP is a useful problem formulation arising, for example, in model calibration and nonlinear system identification when a single regression model combines data from multiple distinct sources, resulting in a multiobjective least squares … Read more
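For a concrete picture of the underlying multiobjective least-squares structure, here is a weighted-sum scalarization sketch: each data source contributes one quadratic objective, and a single model must trade them off. The scalarization device and synthetic data are illustrative, not the paper's uncertainty-quantification method.

```python
import numpy as np

def scalarized_least_squares(datasets, weights):
    """Weighted-sum scalarization of a multiobjective least-squares
    problem: each (A_k, b_k) pair is one data source / objective, and a
    single model x trades them off via the normal equations."""
    AtA = sum(w * A.T @ A for w, (A, _) in zip(weights, datasets))
    Atb = sum(w * A.T @ b for w, (A, b) in zip(weights, datasets))
    return np.linalg.solve(AtA, Atb)

# Two noisy data sources sharing one underlying model.
rng = np.random.default_rng(1)
x_true = np.array([2.0, -1.0])
datasets = []
for _ in range(2):
    A = rng.normal(size=(50, 2))
    datasets.append((A, A @ x_true + 0.1 * rng.normal(size=50)))
print(scalarized_least_squares(datasets, weights=[0.5, 0.5]))  # ≈ x_true
```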

Problem-Parameter-Free Decentralized Nonconvex Stochastic Optimization

Existing decentralized algorithms usually require knowledge of problem parameters to update local iterates. For example, setting hyperparameters such as the learning rate typically requires the Lipschitz constant of the global gradient or topological information about the communication network, neither of which is usually accessible in practice. In this paper, we propose D-NASA, the first algorithm … Read more
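As context, a generic decentralized loop with a normalized joint update and gossip averaging is sketched below; normalizing the update is one standard way to avoid tuning the step size against an unknown Lipschitz constant. This is an illustrative skeleton, not the paper's D-NASA.

```python
import numpy as np

def decentralized_normalized_sgd(stoch_grads, W, X0, lr=0.01, iters=3000, seed=0):
    """Generic decentralized skeleton (not the paper's exact D-NASA):
    each node takes a stochastic gradient step on its local objective,
    the joint update is normalized so the step size needs no knowledge
    of the global Lipschitz constant, and nodes then average iterates
    with neighbors through the doubly stochastic mixing matrix W."""
    rng = np.random.default_rng(seed)
    X = X0.copy()                              # row i = iterate of node i
    for _ in range(iters):
        G = np.stack([g(X[i], rng) for i, g in enumerate(stoch_grads)])
        G /= np.linalg.norm(G) + 1e-12         # scale-free joint update
        X = W @ (X - lr * G)                   # local step, then gossip
    return X.mean(axis=0)

# Toy usage: three nodes with local objectives f_i(x) = ||x - c_i||^2.
W = np.array([[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]])
centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, -1.0]])
grads = [lambda x, rng, c=c: 2 * (x - c) + 0.01 * rng.normal(size=2)
         for c in centers]
print(decentralized_normalized_sgd(grads, W, X0=np.zeros((3, 2))))
# ≈ [1.0, 0.0], the mean of the c_i, up to O(lr) hovering
```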

Variance Reduction and Low Sample Complexity in Stochastic Optimization via Proximal Point Method

This paper proposes a stochastic proximal point method to solve a stochastic convex composite optimization problem. High-probability results in stochastic optimization typically hinge on restrictive assumptions on the stochastic gradient noise, such as sub-Gaussian tails. Assuming only weak conditions such as bounded variance of the stochastic gradient, this paper establishes a low sample complexity … Read more
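The core update is easy to state: each iteration solves a proximal subproblem on one sampled function instead of taking a gradient step. Below is a minimal sketch on a toy consistent least-squares model, where that subproblem has a closed form; the sampling scheme and step size are illustrative.

```python
import numpy as np

def stochastic_proximal_point(prox_sample, x0, eta=0.5, iters=2000, seed=0):
    """Stochastic proximal point: each iteration solves the proximal
    subproblem on a single sampled function,
        x_{k+1} = argmin_z f(z; xi_k) + ||z - x_k||^2 / (2 * eta),
    which is typically more stable than a stochastic gradient step."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        x = prox_sample(x, eta, rng)
    return x

# Toy model: f(x; i) = (a_i @ x - b_i)^2 over a consistent linear system,
# for which the proximal subproblem has a closed-form rank-one update.
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = A @ np.array([1.0, -0.5])

def prox_sample(x, eta, rng):
    i = rng.integers(len(b))
    a_i, b_i = A[i], b[i]
    return x - (2 * eta * (a_i @ x - b_i) / (1 + 2 * eta * (a_i @ a_i))) * a_i

print(stochastic_proximal_point(prox_sample, np.zeros(2)))  # ≈ [1.0, -0.5]
```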

Extending the Reach of First-Order Algorithms for Nonconvex Min-Max Problems with Cohypomonotonicity

We focus on constrained, \(L\)-smooth, nonconvex-nonconcave min-max problems that either satisfy \(\rho\)-cohypomonotonicity or admit a solution to the \(\rho\)-weak Minty Variational Inequality (MVI), where larger values of the parameter \(\rho>0\) correspond to a greater degree of nonconvexity. These problem classes include examples in two-player reinforcement learning, interaction-dominant min-max problems, and certain synthetic test problems on … Read more
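The paper asks which first-order methods retain guarantees under these weaker-than-monotone conditions. As a baseline illustration, here is a projected extragradient sketch on a toy monotone bilinear problem; the genuinely cohypomonotone regime is harder and not reproduced here.

```python
import numpy as np

def extragradient(F, project, z0, step=0.1, iters=2000):
    """Projected extragradient on the saddle operator
    F(z) = (grad_x f(x, y), -grad_y f(x, y)): take an exploratory step,
    then update using the operator evaluated at the explored point."""
    z = z0.copy()
    for _ in range(iters):
        z_half = project(z - step * F(z))      # exploration step
        z = project(z - step * F(z_half))      # corrected update
    return z

# Toy bilinear saddle problem: min_x max_y x*y over the box [-1, 1]^2.
F = lambda z: np.array([z[1], -z[0]])
project = lambda z: np.clip(z, -1.0, 1.0)
print(extragradient(F, project, z0=np.array([0.5, 0.5])))  # ≈ [0, 0]
```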