Online Convex Optimization with Heavy Tails: Old Algorithms, New Regrets, and Applications

In Online Convex Optimization (OCO), when the stochastic gradient has finite variance, many algorithms provably attain sublinear regret. However, far fewer results are known when the gradient estimate is heavy-tailed, i.e., when the stochastic gradient only admits a finite \(\mathsf{p}\)-th central moment for some \(\mathsf{p}\in\left(1,2\right]\). Motivated by this gap, this work examines …
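For concreteness, the heavy-tail condition referenced here is standardly formalized as a bounded \(\mathsf{p}\)-th central moment of the stochastic gradient; the notation \(g_t\), \(f\), \(\sigma\) below is illustrative, since the truncated abstract does not fix it:

\[
\mathbb{E}\left[g_t \mid x_t\right] = \nabla f(x_t), \qquad \mathbb{E}\left[\left\|g_t - \nabla f(x_t)\right\|^{\mathsf{p}} \,\middle|\, x_t\right] \le \sigma^{\mathsf{p}}, \qquad \mathsf{p}\in\left(1,2\right],
\]

with \(\mathsf{p}=2\) recovering the finite-variance setting mentioned in the first sentence, while \(\mathsf{p}<2\) permits the variance to be infinite.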

An Online-Learning Approach to Inverse Optimization

In this paper, we demonstrate how to learn the objective function of a decision-maker while observing only the problem input data and the decision-maker’s corresponding decisions over multiple rounds. Our approach is based on online learning and works for linear objectives over arbitrary feasible sets for which we have a linear optimization oracle. As such, …
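The truncated abstract does not spell out the update rule, but a minimal sketch of the kind of online-gradient loop this setup admits is below, assuming the decision-maker minimizes a hidden linear objective and using a toy box-constrained oracle; the names (box_oracle, learn_objective), the specific loss, and the step size are illustrative choices, not taken from the paper.

```python
import numpy as np

def box_oracle(c, lo, hi):
    """Toy linear optimization oracle: argmin of c @ x over the box [lo, hi].
    Each coordinate is settled independently by the sign of c."""
    return np.where(c >= 0, lo, hi)

def learn_objective(rounds=500, dim=5, seed=0):
    """Illustrative online-gradient loop for inverse optimization:
    recover a linear objective from observed optimal decisions."""
    rng = np.random.default_rng(seed)
    c_true = rng.normal(size=dim)                   # decision-maker's hidden objective
    c_hat = np.zeros(dim)                           # learner's running estimate
    for t in range(1, rounds + 1):
        lo, hi = -rng.random(dim), rng.random(dim)  # round-t feasible box
        x_obs = box_oracle(c_true, lo, hi)          # observed decision
        x_hat = box_oracle(c_hat, lo, hi)           # learner's decision under c_hat
        # Subgradient step on the nonnegative loss c_hat @ (x_obs - x_hat),
        # which is zero exactly when x_obs is optimal under c_hat.
        c_hat -= (1.0 / np.sqrt(t)) * (x_obs - x_hat)
        c_hat /= max(1.0, np.linalg.norm(c_hat))    # project onto the unit ball
    return c_hat, c_true
```

The \(1/\sqrt{t}\) step size and the projection onto the unit ball are the standard online-gradient-descent choices for bounded losses; any linear optimization oracle over any feasible set could stand in for the toy box oracle here.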