Multi-fidelity robust controller design with gradient sampling

Robust controllers that stabilize dynamical systems even under disturbances and noise are often formulated as solutions of nonsmooth, nonconvex optimization problems. While methods such as gradient sampling can handle the nonconvexity and nonsmoothness, the costs of evaluating the objective function may be substantial, making robust control challenging for dynamical systems with high-dimensional state spaces. In … Read more
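
For readers unfamiliar with gradient sampling, a minimal sketch of the basic method (Burke-Lewis-Overton style) on a toy nonsmooth function might look as follows; the objective, sampling radius, and line-search constants are illustrative stand-ins, not the paper's multi-fidelity controller-design setup.

```python
# Minimal gradient-sampling sketch on a toy nonsmooth objective.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # nonsmooth toy objective: a pointwise max of smooth pieces
    return max(x[0]**2 + x[1]**2, (x[0] - 1)**2 + 2 * abs(x[1]))

def grad(x, h=1e-7):
    # forward-difference gradient; f is differentiable almost everywhere
    g, fx = np.zeros_like(x), f(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def gradient_sampling(x, eps=0.1, m=20, iters=50, rng=np.random.default_rng(0)):
    for _ in range(iters):
        # sample gradients at randomly perturbed points near the iterate
        G = np.array([grad(x + eps * rng.uniform(-1, 1, size=x.shape))
                      for _ in range(m)] + [grad(x)])
        # minimum-norm element of the convex hull of sampled gradients (small QP)
        k = len(G)
        res = minimize(lambda lam: np.sum((G.T @ lam)**2), np.ones(k) / k,
                       method="SLSQP", bounds=[(0, 1)] * k,
                       constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1}])
        d = -(G.T @ res.x)
        if np.linalg.norm(d) < 1e-8:
            eps *= 0.5            # shrink sampling radius near stationarity
            continue
        t = 1.0                   # simple backtracking line search
        while f(x + t * d) > f(x) - 1e-4 * t * np.dot(d, d) and t > 1e-12:
            t *= 0.5
        x = x + t * d
    return x

print(gradient_sampling(np.array([2.0, 1.5])))
```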

Robust Optimization with Continuous Decision-Dependent Uncertainty with Applications in Demand Response Portfolio Management

We consider a robust optimization problem with continuous decision-dependent uncertainty (RO-CDDU), which has two new features: an uncertainty set linearly dependent on continuous decision variables and a convex piecewise-linear objective function. We prove that RO-CDDU is strongly NP-hard in general and reformulate it into an equivalent mixed-integer nonlinear program (MINLP) with a decomposable structure to … Read more
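
A toy illustration of decision-dependent uncertainty, assuming a scalar decision x and an uncertainty interval whose width grows linearly in x (not the paper's demand-response model):

```python
# The uncertain unit cost u ranges over an interval that widens with the
# decision x, so the worst case couples u and x bilinearly.
import numpy as np

u0, b, price = 1.0, 0.5, 2.0     # illustrative parameters

def worst_case_profit(x):
    # U(x) = [u0 - b*x, u0 + b*x]; the adversary picks the largest unit cost
    worst_u = u0 + b * x
    return price * x - worst_u * x   # note the x**2 term from decision dependence

xs = np.linspace(0.0, 2.0, 201)
vals = [worst_case_profit(x) for x in xs]
x_star = xs[int(np.argmax(vals))]
print(f"robust-optimal x = {x_star:.2f}, worst-case profit = {max(vals):.3f}")
```

In this instance the robust objective reduces to x − 0.5x², so the grid search recovers x = 1; the bilinear coupling between the decision and its uncertainty set is the feature the paper's MINLP reformulation has to contend with in general.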

DiversiTree: A New Method to Efficiently Compute Diverse Sets of Near-Optimal Solutions to Mixed-Integer Optimization Problems

While most methods for solving mixed-integer optimization problems compute a single optimal solution, a diverse set of near-optimal solutions can often lead to improved outcomes. We present a new method for finding a set of diverse solutions by emphasizing diversity within the search for near-optimal solutions. Specifically, within a branch-and-bound framework, we investigate parameterized node … Read more
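
One way to picture the goal (though not the paper's node-selection mechanism) is post hoc selection: given a pool of near-optimal binary solutions, greedily keep a subset with large pairwise Hamming distances.

```python
# Hedged sketch: greedy max-min-diversity selection from a solution pool
# (e.g., a solver's pool of near-optimal incumbents).
import numpy as np

def greedy_diverse_subset(pool, k):
    pool = np.asarray(pool, dtype=int)
    chosen = [0]                      # start from the first (e.g., best) solution
    while len(chosen) < k:
        # candidate whose minimum Hamming distance to the chosen set is largest
        dists = [min(int(np.sum(s != pool[c])) for c in chosen) for s in pool]
        for c in chosen:
            dists[c] = -1             # exclude already-chosen solutions
        chosen.append(int(np.argmax(dists)))
    return pool[chosen]

pool = [[1, 0, 1, 0], [1, 0, 1, 1], [0, 1, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]]
print(greedy_diverse_subset(pool, 3))
```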

Robust Phi-Divergence MDPs

In recent years, robust Markov decision processes (MDPs) have emerged as a prominent modeling framework for dynamic decision problems affected by uncertainty. In contrast to classical MDPs, which only account for stochasticity by modeling the dynamics through a stochastic process with a known transition kernel, robust MDPs additionally account for ambiguity by optimizing in view … Read more
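
As a hedged illustration of the inner problem robust MDPs must solve, here is a robust Bellman backup for one member of the phi-divergence family, the KL divergence, using the standard dual of the worst-case expectation; the toy MDP, radius, and all numbers are made up.

```python
# Robust value iteration with a KL-divergence ambiguity ball around a nominal
# transition kernel; the inner worst case is solved via its 1-D concave dual.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

def kl_worst_case_value(v, p0, radius):
    # inf_{p : KL(p||p0) <= radius} p @ v, via sup_{alpha>0} of the dual
    def neg_dual(alpha):
        return -(-alpha * logsumexp(-v / alpha, b=p0) - alpha * radius)
    res = minimize_scalar(neg_dual, bounds=(1e-6, 1e3), method="bounded")
    return -res.fun

# toy 2-state, 2-action MDP: nominal kernels P0[a, s], rewards R[a, s]
P0 = np.array([[[0.8, 0.2], [0.3, 0.7]],      # action 0
               [[0.5, 0.5], [0.9, 0.1]]])     # action 1
R = np.array([[1.0, 0.5], [0.2, 1.2]])
gamma, radius = 0.9, 0.05
v = np.zeros(2)
for _ in range(200):                           # robust value iteration
    q = np.array([[R[a, s] + gamma * kl_worst_case_value(v, P0[a, s], radius)
                   for s in range(2)] for a in range(2)])
    v = q.max(axis=0)
print("robust value function:", v)
```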

Adjusted Distributionally Robust Bounds on Expected Loss Functions

Optimization problems in operations and finance often include a cost that is proportional to the expected amount by which a random variable exceeds some fixed quantity, known as the expected loss function. Representation of this function often leads to computational challenges, depending on the distribution of the random variable of interest. Moreover, in practice, a … Read more
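
For a concrete instance of the expected loss function, here is the classical closed form for a normal random variable, E[(X − q)⁺] = σ(φ(z) − z(1 − Φ(z))) with z = (q − μ)/σ, checked against Monte Carlo; the numbers are illustrative.

```python
# Expected loss E[(X - q)^+] for X ~ N(mu, sigma^2): closed form vs. simulation.
import numpy as np
from scipy.stats import norm

mu, sigma, q = 100.0, 20.0, 110.0
z = (q - mu) / sigma
# standard normal loss function: E[(X - q)^+] = sigma * (phi(z) - z*(1 - Phi(z)))
closed_form = sigma * (norm.pdf(z) - z * (1.0 - norm.cdf(z)))

rng = np.random.default_rng(0)
samples = rng.normal(mu, sigma, size=1_000_000)
monte_carlo = np.maximum(samples - q, 0.0).mean()
print(f"closed form: {closed_form:.4f}, Monte Carlo: {monte_carlo:.4f}")
```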

Wasserstein Logistic Regression with Mixed Features

Recent work has leveraged the popular distributionally robust optimization paradigm to combat overfitting in classical logistic regression. While the resulting classification scheme displays promising performance in numerical experiments, it is inherently limited to numerical features. In this paper, we show that distributionally robust logistic regression with mixed (i.e., numerical and categorical) features, despite amounting … Read more
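
For context, the known continuous-feature case reduces to logistic loss plus a norm penalty on the weights (a Wasserstein ball with transport cost on the features); a plain gradient-descent sketch of that reduction, with illustrative data, is below. The mixed-feature case is the paper's contribution and is not captured here.

```python
# Wasserstein-DRO logistic regression, continuous features only:
# logistic loss + eps * ||w||_2 (the penalty norm is dual to the ground cost).
import numpy as np

def wdro_logreg(X, y, eps=0.1, lr=0.1, iters=2000):
    n, d = X.shape                            # y in {-1, +1}
    w, b = np.zeros(d), 0.0
    for _ in range(iters):
        m = y * (X @ w + b)
        s = -y / (1.0 + np.exp(m))            # gradient of log(1 + exp(-m)) in m
        gw = X.T @ s / n + eps * w / (np.linalg.norm(w) + 1e-12)
        gb = s.mean()
        w -= lr * gw
        b -= lr * gb
    return w, b

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.sign(X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200))
w, b = wdro_logreg(X, y)
print("weights:", w, "bias:", b)
```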

Complexity-optimal and Parameter-free First-order Methods for Finding Stationary Points of Composite Optimization Problems

This paper develops and analyzes an accelerated proximal descent method for finding stationary points of nonconvex composite optimization problems. The objective function is of the form f + h, where h is a proper closed convex function, f is differentiable on the domain of h, and ∇f is Lipschitz continuous on the domain of h. … Read more
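
A minimal FISTA-style accelerated proximal gradient sketch for this problem class, with f a least-squares term and h = λ‖·‖₁; unlike the paper's parameter-free method, it assumes the Lipschitz constant of ∇f is known.

```python
# Accelerated proximal gradient (FISTA) for min 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

def fista(A, b, lam, iters=500):
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of grad f
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        g = A.T @ (A @ z - b)                 # grad f at the extrapolated point
        u = z - g / L
        x_new = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # prox of h
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100); x_true[:5] = rng.normal(size=5)
b = A @ x_true + 0.01 * rng.normal(size=40)
print("nonzeros recovered:", np.flatnonzero(np.abs(fista(A, b, lam=0.1)) > 1e-3))
```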

Neur2SP: Neural Two-stage Stochastic Programming

Stochastic programming is a powerful modeling framework for decision-making under uncertainty. In this work, we tackle two-stage stochastic programs (2SPs), the most widely applied and studied class of stochastic programming models. Solving 2SPs exactly requires evaluation of an expected value function that is computationally intractable. Additionally, having a mixed-integer linear program (MIP) or a nonlinear … Read more
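
The surrogate idea can be sketched as follows: learn a cheap approximation of the expected second-stage value Q(x) from samples, then optimize the first stage against it. The toy recourse function and the scikit-learn regressor below are illustrative; Neur2SP itself embeds the trained ReLU network inside a MIP rather than grid-searching as done here.

```python
# Learn Qhat(x) ~ E[V(x, xi)] from samples, then minimize f(x) + Qhat(x).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def second_stage_value(x, xi):
    # toy recourse: penalty for unmet demand xi, smaller penalty for excess
    return 5.0 * max(xi - x, 0.0) + 1.0 * max(x - xi, 0.0)

# training pairs: first-stage decision x vs. mean second-stage cost
scenarios = rng.normal(10.0, 3.0, size=500)
xs = rng.uniform(0.0, 20.0, size=300)
ys = [np.mean([second_stage_value(x, xi) for xi in scenarios]) for x in xs]
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(xs.reshape(-1, 1), ys)

# first-stage optimization against the surrogate (grid search for illustration)
grid = np.linspace(0.0, 20.0, 401)
cost = 2.0 * grid + surrogate.predict(grid.reshape(-1, 1))  # order cost + Qhat
print(f"surrogate-optimal order = {grid[int(np.argmin(cost))]:.2f}")
```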

A Reduced Jacobian Scheme with Full Convergence for Multicriteria Optimization

In this paper, we propose a variant of the reduced Jacobian method (RJM) introduced by El Maghri and Elboulqe in [JOTA, 179 (2018) 917–943] for multicriteria optimization under linear constraints. The motivation is that, in contrast to RJM, which has only global convergence to Pareto KKT-stationary points in the classical sense of accumulation points, this new variant … Read more
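
For orientation only, here is a generic multiobjective descent step (a min-norm common-descent direction in the style of Fliege and Svaiter), which stalls exactly at Pareto-stationary points; it is not the reduced Jacobian scheme, which additionally exploits the linear constraints.

```python
# Min-norm common descent for several objectives at once (unconstrained toy).
import numpy as np
from scipy.optimize import minimize

def common_descent_direction(J):
    # J: m x n Jacobian of the objectives; the negative min-norm element of the
    # convex hull of its rows is a common descent direction (zero iff Pareto
    # KKT-stationary in the unconstrained case)
    m = J.shape[0]
    res = minimize(lambda lam: np.sum((J.T @ lam) ** 2), np.ones(m) / m,
                   method="SLSQP", bounds=[(0, 1)] * m,
                   constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1}])
    return -(J.T @ res.x)

# two objectives of x in R^2: f1 = ||x - a||^2, f2 = ||x - b||^2
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x = np.array([2.0, 2.0])
for _ in range(100):
    J = np.vstack([2 * (x - a), 2 * (x - b)])
    d = common_descent_direction(J)
    if np.linalg.norm(d) < 1e-8:
        break                          # (approximately) Pareto stationary
    x = x + 0.1 * d
print("approx Pareto point:", x)
```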

Accelerating Frank-Wolfe via Averaging Step Directions

The Frank-Wolfe method is popular in sparse constrained optimization owing to its low per-iteration complexity. However, the tradeoff is that its worst-case global convergence is comparatively slow, and, importantly, is fundamentally slower than its flow rate; that is to say, the convergence rate is throttled by discretization error. In this work, we consider … Read more
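
A hedged sketch of the flavor of the idea: vanilla Frank-Wolfe on an ℓ1-ball-constrained least-squares problem, plus a naive running average of the linear-oracle vertices as a stand-in for direction averaging; the paper's actual averaging scheme and step sizes differ.

```python
# Frank-Wolfe on min 0.5*||Ax - b||^2 s.t. ||x||_1 <= radius, with an optional
# running average of LMO vertices (a convex combination, so iterates stay feasible).
import numpy as np

def lmo_l1(g, radius):
    # linear minimization oracle over the l1 ball: a signed coordinate vertex
    s = np.zeros_like(g)
    i = int(np.argmax(np.abs(g)))
    s[i] = -radius * np.sign(g[i])
    return s

def frank_wolfe(A, b, radius=1.0, iters=200, average=False):
    x = np.zeros(A.shape[1])
    s_avg = np.zeros_like(x)
    for k in range(iters):
        g = A.T @ (A @ x - b)
        s = lmo_l1(g, radius)
        s_avg = (k * s_avg + s) / (k + 1)     # running average of LMO vertices
        step = 2.0 / (k + 2)
        x = x + step * ((s_avg if average else s) - x)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 50))
b = A @ (np.eye(50)[0] * 0.8) + 0.01 * rng.normal(size=30)
for avg in (False, True):
    x = frank_wolfe(A, b, average=avg)
    print(f"average={avg}: obj = {0.5 * np.sum((A @ x - b) ** 2):.4f}")
```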