In this paper, we consider two paradigms developed to account for uncertainty in optimization models: robust optimization (RO) and joint estimation-optimization (JEO). We examine recent developments on efficient and scalable iterative first-order methods for these problems and show that these methods can be viewed through the lens of online convex optimization (OCO). The standard OCO framework has seen much success owing to its ability to handle decision-making in dynamic, uncertain, and even adversarial environments. Nevertheless, our applications of interest call for further flexibility in OCO via three simple modifications to its standard assumptions: we introduce two new concepts, weighted regret and online saddle point problems, and study the possibility of making lookahead (anticipatory) decisions. We demonstrate how these new flexibilities are instrumental in exploiting structural properties of the loss functions, which leads to improved convergence rates in our flexible OCO framework. In particular, relaxing the usual OCO assumptions of uniform weights and non-anticipatory decisions allows us to utilize algorithms that significantly speed up convergence and go beyond the established lower bounds on regret for strongly convex or smooth loss functions. We then apply these OCO tools to RO and JEO. Our results improve the convergence guarantees of first-order methods in the context of RO and JEO under certain structural assumptions, and in certain cases they match the best known or optimal rates for the corresponding problem classes without data uncertainty.
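To fix ideas, weighted regret can be sketched as follows (an assumed formalization consistent with the abstract, not quoted from the paper): given positive weights w_1, ..., w_T and convex losses f_1, ..., f_T revealed online over a convex domain X, the weighted regret of the iterates x_1, ..., x_T is

\[
\mathrm{Regret}_T(w) \;=\; \sum_{t=1}^{T} w_t\, f_t(x_t) \;-\; \min_{x \in X} \sum_{t=1}^{T} w_t\, f_t(x).
\]

Standard OCO corresponds to the uniform choice w_t = 1 for all t; non-uniform choices such as weights growing linearly in t are, under our assumed reading, the kind of relaxation that underlies the faster rates for strongly convex losses mentioned above.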
Citation: Exploiting Problem Structure in Optimization under Uncertainty via Online Convex Optimization. Technical report, Tepper School of Business, Carnegie Mellon University, August 2016.