Counterfactual explanations with the k-Nearest Neighborhood classifier and uncertain data

Counterfactual Analysis is a powerful tool in Explainable Machine Learning. Given a classifier and a record, one seeks the smallest perturbation of the record such that the perturbed record, called the counterfactual explanation, is assigned to the desired class. Feature uncertainty in data reflects the inherent variability and noise present in real-world scenarios, and therefore, there is a … Read more
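The nearest-neighbor setting can be sketched with a simple heuristic: classify by majority vote among the k nearest training points, then bisect along the segment toward the nearest training point of the target class until the prediction flips. This is an illustrative baseline assuming a plain Euclidean k-NN; the function names are mine, and the paper's handling of uncertain data is not modeled here.

```python
import numpy as np

def knn_predict(X, y, q, k=3):
    """Majority vote among the k nearest training points (Euclidean)."""
    idx = np.argsort(np.linalg.norm(X - q, axis=1))[:k]
    return np.bincount(y[idx]).argmax()

def counterfactual(X, y, record, target, k=3, steps=100):
    """Bisect along the segment from `record` to the nearest
    target-class training point; return the closest point on that
    segment that k-NN assigns to `target` (a simple heuristic,
    not the paper's method)."""
    cands = X[y == target]
    nearest = cands[np.argmin(np.linalg.norm(cands - record, axis=1))]
    lo, hi = 0.0, 1.0  # hi=1 lands on a target-class training point
    for _ in range(steps):
        mid = (lo + hi) / 2
        if knn_predict(X, y, record + mid * (nearest - record), k) == target:
            hi = mid
        else:
            lo = mid
    return record + hi * (nearest - record)
```

The bisection assumes the class label changes once along the segment; a full counterfactual search would also minimize over candidate directions.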

Optimizing pricing strategies through learning the market structure

This study explores the integration of market structure learning into pricing strategies to maximize revenue in e-commerce and retail environments. We consider the problem of determining the revenue-maximizing price of a single product in a market of heterogeneous consumers segmented by their product valuations, and analyze the pricing strategies for varying levels of prior … Read more
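In the idealized case where consumer valuations are fully known, the single-price problem is elementary: a buyer purchases when the price is at most their valuation, and it suffices to check prices equal to some consumer's valuation. A minimal sketch of that known-valuation baseline (the paper's focus, learning under prior uncertainty, is not modeled here):

```python
def optimal_price(valuations):
    """Revenue-maximizing single price when valuations are known:
    revenue at price p is p times the number of buyers with
    valuation >= p, and only prices equal to some valuation
    need to be checked."""
    return max(valuations, key=lambda p: p * sum(v >= p for v in valuations))
```

For valuations [1, 2, 3], price 2 yields revenue 4, beating prices 1 and 3 (revenue 3 each).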

Adaptive Conditional Gradient Descent

Selecting an effective step-size is a fundamental challenge in first-order optimization, especially for problems with non-Euclidean geometries. This paper presents a novel adaptive step-size strategy for optimization algorithms that rely on linear minimization oracles, as used in the Conditional Gradient or non-Euclidean Normalized Steepest Descent algorithms. Using a simple heuristic to estimate a local Lipschitz … Read more
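A generic version of such an adaptive step-size can be sketched for Frank-Wolfe (Conditional Gradient): maintain a running Lipschitz estimate, take the short step it implies, and double the estimate until a quadratic upper bound certifies the step. The backtracking rule below is an assumption for illustration, not necessarily the paper's heuristic.

```python
import numpy as np

def adaptive_frank_wolfe(f, grad, lmo, x0, iters=2000, L=1.0):
    """Frank-Wolfe with an adaptive step-size: try the short step
    implied by the current Lipschitz estimate L, doubling L until a
    quadratic upper bound holds (generic backtracking heuristic)."""
    x = x0
    for _ in range(iters):
        g = grad(x)
        d = lmo(g) - x                  # direction toward the LMO vertex
        gap = -g @ d                    # Frank-Wolfe duality gap
        if gap < 1e-12:
            break
        L = max(0.5 * L, 1e-10)         # optimistic shrink between iterations
        while True:
            gamma = min(1.0, gap / (L * (d @ d)))
            if f(x + gamma * d) <= f(x) - gamma * gap + 0.5 * L * gamma**2 * (d @ d):
                break                   # quadratic upper bound certified
            L *= 2.0
        x = x + gamma * d
    return x

# Toy use: project p onto the probability simplex; the LMO over the
# simplex returns the vertex minimizing the inner product with g.
p = np.array([0.9, 0.6, -0.5])
f = lambda x: 0.5 * np.sum((x - p) ** 2)
grad = lambda x: x - p
lmo = lambda g: np.eye(len(g))[np.argmin(g)]
x = adaptive_frank_wolfe(f, grad, lmo, np.ones(3) / 3)
```

Since iterates are convex combinations of simplex vertices, feasibility is maintained without projection, which is the appeal of linear-minimization-oracle methods.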

Machine Learning Algorithms for Improving Black Box Optimization Solvers

Black-box optimization (BBO) addresses problems where objectives are accessible only through costly queries without gradients or explicit structure. Classical derivative-free methods—line search, direct search, and model-based solvers such as Bayesian optimization—form the backbone of BBO, yet often struggle in high-dimensional, noisy, or mixed-integer settings. Recent advances use machine learning (ML) and reinforcement learning (RL) to … Read more

Toward Decision-Oriented Prognostics: An Integrated Estimate-Optimize Framework for Predictive Maintenance

Recent research increasingly integrates machine learning (ML) into predictive maintenance (PdM) to reduce operational and maintenance costs in data-rich operational settings. However, uncertainty due to model misspecification continues to limit widespread industrial adoption. This paper investigates a PdM framework in which sensor-driven prognostics inform decision-making under economic trade-offs within a finite decision space. We investigate … Read more

An Optimization-Based Algorithm for Fair and Calibrated Synthetic Data Generation

For agent-based microsimulations, such as those used for epidemiological modeling during the COVID-19 pandemic, a realistic base population is crucial. Beyond demographic variables, health-related variables should also be included. In Germany, health-related surveys are typically small in scale, which presents several challenges when generating these variables. Specifically, strongly imbalanced classes and insufficient … Read more

Deterministic global optimization with trained neural networks: What is the benefit of the envelope of single neurons?

Optimization problems containing trained neural networks remain challenging due to their nonconvexity. Deterministic global optimization relies on relaxations that should be tight, quickly convergent, and cheap to evaluate. While envelopes of common activation functions have been established for several years, the envelope of an entire neuron had not. Recently, Carrasco and Muñoz (arXiv:2410.23362, 2024) proposed … Read more
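As background on the activation-function envelopes mentioned above (standard material, independent of the paper's neuron-level results), the ReLU case on a box domain reads:

```latex
% ReLU $\sigma(x)=\max(0,x)$ on $[\ell,u]$ with $\ell < 0 < u$:
% $\sigma$ is convex, so it equals its own convex envelope, while the
% concave envelope is the secant through $(\ell,0)$ and $(u,u)$:
\operatorname{vex}_{[\ell,u]}\sigma = \sigma,
\qquad
\operatorname{cav}_{[\ell,u]}\sigma(x) = \frac{u\,(x-\ell)}{u-\ell},
% yielding the classical "triangle" relaxation
\max(0,x) \;\le\; y \;\le\; \frac{u\,(x-\ell)}{u-\ell}.
```

Envelopes of a whole neuron $\sigma(w^\top x + b)$ over a box are tighter than composing such single-variable envelopes, which is the gap the cited work addresses.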

Newtonian Methods with Wolfe Linesearch in Nonsmooth Optimization and Machine Learning

This paper introduces and develops coderivative-based Newton methods with Wolfe linesearch conditions to solve various classes of problems in nonsmooth optimization and machine learning. We first propose a generalized regularized Newton method with Wolfe linesearch (GRNM-W) for unconstrained $C^{1,1}$ minimization problems (which are second-order nonsmooth) and establish global as well as local superlinear convergence of … Read more
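The Wolfe linesearch component can be sketched with the textbook bracketing scheme for the weak Wolfe conditions (sufficient decrease plus curvature); this is a generic building block, not the paper's GRNM-W method itself.

```python
import numpy as np

def wolfe_linesearch(f, grad, x, d, c1=1e-4, c2=0.9, max_iter=60):
    """Bisection search for a step alpha satisfying the weak Wolfe
    conditions: f(x+a d) <= f(x) + c1 a g0 (sufficient decrease) and
    grad(x+a d).d >= c2 g0 (curvature), where g0 = grad(x).d < 0."""
    f0, g0 = f(x), grad(x) @ d
    assert g0 < 0, "d must be a descent direction"
    alpha, lo, hi = 1.0, 0.0, np.inf
    for _ in range(max_iter):
        if f(x + alpha * d) > f0 + c1 * alpha * g0:
            hi = alpha                          # sufficient decrease fails: too long
        elif grad(x + alpha * d) @ d < c2 * g0:
            lo = alpha                          # curvature fails: too short
        else:
            return alpha                        # both Wolfe conditions hold
        alpha = 0.5 * (lo + hi) if np.isfinite(hi) else 2.0 * alpha
    return alpha
```

For example, on $f(x)=\|x\|^2$ from $x=3$ along $d=-6$, the unit step overshoots, and one bisection lands exactly on the minimizer.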

Variable metric proximal stochastic gradient methods with additional sampling

Regularized empirical risk minimization problems arise in a variety of applications, including machine learning, signal processing, and image processing. Proximal stochastic gradient algorithms are a standard approach to solve these problems due to their low computational cost per iteration and a relatively simple implementation. This paper introduces a class of proximal stochastic gradient methods built … Read more
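The baseline such methods build on can be sketched for L1-regularized least squares: a minibatch gradient step followed by the proximal operator of the regularizer (soft-thresholding). The variable-metric and additional-sampling features of the paper are deliberately omitted here; this is only the plain prox-SGD skeleton.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sgd(A, b, lam, step=0.01, epochs=50, batch=10, seed=0):
    """Plain proximal stochastic gradient for
    min_x (1/2m)||Ax - b||^2 + lam * ||x||_1."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(epochs):
        for _ in range(m // batch):
            idx = rng.integers(0, m, size=batch)
            g = A[idx].T @ (A[idx] @ x - b[idx]) / batch  # minibatch gradient
            x = soft_threshold(x - step * g, step * lam)  # proximal step
    return x
```

Each iteration touches only `batch` rows of A, which is the "low computational cost per iteration" the abstract refers to.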

Multiple Kernel Learning-Aided Column-and-Constraint Generation Method

Two-stage robust optimization (two-stage RO), due to its ability to balance robustness and flexibility, has been widely used in various fields for decision-making under uncertainty. This paper proposes a multiple kernel learning (MKL)-aided column-and-constraint generation (CCG) method to solve two-stage RO problems in the context of data-driven decision optimization, and releases a corresponding registered Julia package, … Read more
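The CCG loop itself is simple to state: solve a master problem over the scenarios identified so far, query a subproblem for the worst-case scenario of the master decision, and stop when the bounds meet. The toy enumeration below over finite decision and uncertainty sets only shows that loop; real CCG solves the master and subproblem as LPs/MILPs, and the MKL component of the paper is not modeled.

```python
def ccg(xs, us, cost):
    """Toy column-and-constraint generation over finite sets:
    xs = candidate decisions, us = uncertainty realizations,
    cost(x, u) = cost of decision x under realization u."""
    active = [us[0]]                       # initial scenario set
    while True:
        # master: best decision against the active scenarios (lower bound)
        x = min(xs, key=lambda cand: max(cost(cand, u) for u in active))
        master_val = max(cost(x, u) for u in active)
        # subproblem: worst-case scenario for that decision (upper bound)
        worst = max(us, key=lambda u: cost(x, u))
        if cost(x, worst) <= master_val + 1e-12:
            return x                       # bounds meet: x is robust-optimal
        active.append(worst)               # add the violating scenario
```

For cost(x, u) = (x - u)^2 with u in {-1, 1}, the robust optimum hedges between the two scenarios at x = 0.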