Gradient Tracking Methods for Distributed Stochastic Optimization Problems with Decision-dependent Distributions

This paper seeks the performatively stable solution and the optimal solution of a distributed stochastic optimization problem with decision-dependent distributions, that is, a finite-sum stochastic optimization problem over a network in which the distributions depend on the decision variables. For the performatively stable solution, we provide an algorithm, DSGTD-GD, which combines the distributed stochastic …
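
As a rough illustration of the gradient-tracking template that DSGTD-GD builds on, here is a minimal sketch of one distributed stochastic gradient-tracking step in which every agent draws its sample from a decision-dependent distribution. The weight matrix `W`, the stepsize `alpha`, and the `sample`/`grad` callables are illustrative placeholders, not the update rules from the paper (whose abstract is truncated above).

```python
import numpy as np

# Minimal sketch of one distributed stochastic gradient-tracking iteration with
# decision-dependent sampling. Not the DSGTD-GD rules from the paper: W, alpha,
# and the per-agent callables are illustrative assumptions.

def gradient_tracking_step(X, Y, G_old, W, alpha, sample, grad):
    """X, Y, G_old: (n_agents, dim) arrays of decisions, gradient trackers,
    and the stochastic gradients evaluated at the previous decisions."""
    n = X.shape[0]
    X_new = W @ X - alpha * Y                        # consensus mixing + descent along the tracker
    # fresh stochastic gradients, each drawn from the decision-dependent distribution D_i(x_i)
    G_new = np.stack([grad(i, X_new[i], sample(i, X_new[i])) for i in range(n)])
    Y_new = W @ Y + G_new - G_old                    # track the network-average stochastic gradient
    return X_new, Y_new, G_new                       # initialize with Y = G_old at the starting point
```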

Data-Driven Contextual Optimization with Gaussian Mixtures: Flow-Based Generalization, Robust Models, and Multistage Extensions

Contextual optimization enhances decision quality by leveraging side information to improve predictions of uncertain parameters. However, existing approaches face significant challenges when dealing with multimodal or mixture distributions. The inherent complexity of such structures often precludes an explicit functional relationship between the contextual information and the uncertain parameters, limiting the direct applicability of parametric …

rAdam: restart Adam method to escape from local minima for bound constrained non-linear optimization problems

This paper presents a restart version of the Adaptive Moment Estimation (Adam) method for bound-constrained nonlinear optimization problems. It aims to avoid getting trapped in local minima and to enable exploration toward the global optimum. The proposed method couples an adapted restart strategy with a barrier methodology to handle the bound constraints. Computational comparison with …
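
The excerpt does not spell out the restart rule or the barrier treatment, so the following is only a hedged sketch of the general idea: Adam applied to a log-barrier surrogate of the bounds, with the moment estimates reset when progress stalls. The stall test, the barrier weight `mu`, and all other parameters are assumptions for illustration, not the rAdam rules.

```python
import numpy as np

# Sketch only: Adam on f(x) - mu*sum(log(x-l)) - mu*sum(log(u-x)), with a
# restart (reset of the moment estimates) after a stretch of no improvement.

def radam_sketch(f, grad_f, x, l, u, mu=1e-2, lr=1e-3, beta1=0.9, beta2=0.999,
                 eps=1e-8, iters=1000, stall_tol=1e-8, stall_window=50):
    m = np.zeros_like(x); v = np.zeros_like(x); t = 0
    best, since_improve = f(x), 0
    for _ in range(iters):
        t += 1
        # gradient of the objective plus the log-barrier keeping l < x < u
        g = grad_f(x) - mu / (x - l) + mu / (u - x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        mhat, vhat = m / (1 - beta1**t), v / (1 - beta2**t)
        x = np.clip(x - lr * mhat / (np.sqrt(vhat) + eps), l + 1e-12, u - 1e-12)
        val = f(x)
        if val < best - stall_tol:
            best, since_improve = val, 0
        else:
            since_improve += 1
        if since_improve >= stall_window:            # restart: reset moments to leave the current basin
            m[:] = 0; v[:] = 0; t = 0; since_improve = 0
    return x
```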

ASPEN: An Additional Sampling Penalty Method for Finite-Sum Optimization Problems with Nonlinear Equality Constraints

We propose a novel algorithm for solving non-convex, nonlinear equality-constrained finite-sum optimization problems. The proposed algorithm incorporates an additional sampling strategy for sample size update into the well-known framework of quadratic penalty methods. Thus, depending on the problem at hand, the resulting method may exhibit a sample size strategy ranging from a mini-batch on one …
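
A hedged sketch of the general scheme described above: a quadratic-penalty outer loop whose subsampled inner steps use a sample size that can grow when an independent ("additional") sample suggests the current estimates are unreliable. The inner solver, the growth test, the tolerance `tol`, and the penalty schedule are illustrative assumptions, not the ASPEN rules.

```python
import numpy as np

# Sketch: subsampled gradient steps on the quadratic penalty
# f_batch(x) + (rho/2)||c(x)||^2, with an adaptive sample size.

def penalty_adaptive_sampling(grad_fi, c, jac_c, x, N_total, rho=1.0, n=32,
                              outer_iters=20, inner_iters=100, lr=1e-2, tol=1e-2):
    rng = np.random.default_rng(0)
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            batch = rng.choice(N_total, size=n, replace=False)
            g = np.mean([grad_fi(i, x) for i in batch], axis=0)   # subsampled objective gradient
            g += rho * jac_c(x).T @ c(x)                          # gradient of the quadratic penalty
            x = x - lr * g
        # additional, independent sample to check progress; grow the batch if the
        # independent gradient estimate is still large (illustrative test only)
        check = rng.choice(N_total, size=n, replace=False)
        g_check = np.mean([grad_fi(i, x) for i in check], axis=0) + rho * jac_c(x).T @ c(x)
        if np.linalg.norm(g_check) > tol:
            n = min(2 * n, N_total)
        rho *= 2.0                                                # tighten feasibility
    return x
```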

A linesearch-based derivative-free method for noisy black-box problems

In this work we consider unconstrained optimization problems in which the objective function is known only through a zeroth-order stochastic oracle that gives an estimate of the true objective function. To solve these problems, we propose a derivative-free algorithm based on extrapolation techniques. Under reasonable assumptions we are able to prove convergence properties for the proposed algorithms. …
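
Since the abstract stops before the algorithmic details, the following is only a generic sketch of a coordinate-wise derivative-free linesearch with expansion (extrapolation) steps driven by a noisy zeroth-order oracle `F`; the sufficient-decrease test and the parameters `gamma`, `delta` are generic choices, not the ones analyzed in the paper.

```python
import numpy as np

# Generic derivative-free linesearch sketch: probe each coordinate direction,
# and if a step gives sufficient decrease, keep doubling it (extrapolation);
# otherwise shrink that coordinate's tentative stepsize.

def df_linesearch(F, x, alpha0=1.0, gamma=1e-6, delta=0.5, iters=100):
    n = x.size
    alpha = np.full(n, alpha0)
    for _ in range(iters):
        for i in range(n):
            d = np.zeros(n); d[i] = 1.0
            for sgn in (+1.0, -1.0):
                a, fx = alpha[i], F(x)
                if F(x + sgn * a * d) <= fx - gamma * a**2:        # sufficient decrease
                    while F(x + sgn * 2 * a * d) <= fx - gamma * (2 * a)**2:
                        a *= 2                                     # extrapolate along d
                    x = x + sgn * a * d
                    alpha[i] = a
                    break
            else:
                alpha[i] *= delta                                  # no decrease: shrink the step
    return x
```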

Inspection Games with Incomplete Information and Heterogeneous Resources

We study a two-player zero-sum inspection game with incomplete information, where an inspector deploys resources to maximize the expected damage value of detected illegal items hidden by an adversary across capacitated locations. Inspection and illegal resources differ in their detection capabilities and damage values. Both players face uncertainty regarding each other’s available resources, modeled via …

On Relatively Smooth Optimization over Riemannian Manifolds

We study optimization over Riemannian embedded submanifolds, where the objective function is relatively smooth in the ambient Euclidean space. Such problems have broad applications but are still largely unexplored. We introduce two Riemannian first-order methods, namely the retraction-based and projection-based Riemannian Bregman gradient methods, by incorporating the Bregman distance into the update steps. The retraction-based …
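
For orientation, here is a hedged sketch of what a projection-based Bregman gradient step can look like; the paper's precise retraction- and projection-based updates are not given in the excerpt, so the scheme below (a Euclidean Bregman step followed by a projection onto the manifold) is only a placeholder. With kernel h, Bregman distance D_h, and stepsize t_k:

```latex
% Illustrative sketch, not the paper's exact scheme.
\begin{aligned}
D_h(y,x) &= h(y) - h(x) - \langle \nabla h(x),\, y - x\rangle,\\
y_{k+1} &= \arg\min_{y}\ \langle \nabla f(x_k),\, y - x_k\rangle + \tfrac{1}{t_k}\, D_h(y, x_k),\\
x_{k+1} &= \operatorname{Proj}_{\mathcal{M}}\!\big(y_{k+1}\big).
\end{aligned}
```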

On the Convergence and Complexity of Proximal Gradient and Accelerated Proximal Gradient Methods under Adaptive Gradient Estimation

In this paper, we propose a proximal gradient method and an accelerated proximal gradient method for solving composite optimization problems, where the objective function is the sum of a smooth and a convex, possibly nonsmooth, function. We consider settings where the smooth component is either a finite-sum function or an expectation of a stochastic function, …
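
A minimal sketch of the kind of step such methods take in the finite-sum setting: a proximal gradient update in which the smooth gradient is estimated on a mini-batch whose size adapts (here it simply grows with the iteration counter). The growth rule and the choice of nonsmooth term (an l1 penalty handled by soft-thresholding) are illustrative assumptions, not the paper's adaptive gradient estimation scheme.

```python
import numpy as np

# Sketch: stochastic proximal gradient for f(x) + lam*||x||_1, where
# f(x) = (1/N) sum_i f_i(x) and the gradient estimate uses a growing mini-batch.

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def adaptive_prox_grad(grad_fi, x, N_total, lam=0.1, step=1e-2, iters=200, b0=8):
    rng = np.random.default_rng(0)
    for k in range(iters):
        b = min(N_total, b0 * (k + 1))                       # adaptive (growing) sample size
        batch = rng.choice(N_total, size=b, replace=False)
        g = np.mean([grad_fi(i, x) for i in batch], axis=0)  # mini-batch gradient estimate
        x = soft_threshold(x - step * g, step * lam)         # prox step for the l1 term
    return x
```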

Anesthesiologist Scheduling with Handoffs: A Combined Approach of Optimization and Human Factors

We present a two-stage stochastic programming model for optimizing anesthesiologist schedules, explicitly accounting for uncertainty in surgery durations and anesthesiologist handoffs. To inform model design, we conducted an online survey at our partner institution to identify key factors affecting the quality of intraoperative anesthesiologist handoffs. Insights from the survey results are incorporated into the model, …

Distributionally Robust Universal Classification: Bypassing the Curse of Dimensionality

The Universal Classification (UC) problem seeks an optimal classifier from a universal policy space to minimize the expected 0-1 loss, also known as the misclassification risk. However, conventional empirical risk minimization often leads to overfitting and poor out-of-sample performance. To address this limitation, we introduce the Distributionally Robust Universal Classification (DRUC) formulation, which incorporates …
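
For context, a generic distributionally robust version of the 0-1 risk reads as follows; the ambiguity set actually used by DRUC is not stated in the excerpt, so the ball around the empirical distribution below is only a placeholder:

```latex
% Illustrative sketch of a distributionally robust 0-1 risk, not the DRUC formulation.
\min_{\pi \in \Pi}\ \sup_{Q \in \mathcal{B}_{\varepsilon}(\widehat{P}_N)}\
\mathbb{E}_{(X,Y)\sim Q}\!\left[\mathbf{1}\{\pi(X) \neq Y\}\right]
```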