Toward Decision-Oriented Prognostics: An Integrated Estimate-Optimize Framework for Predictive Maintenance

Recent research increasingly integrates machine learning (ML) into predictive maintenance (PdM) to reduce operational and maintenance costs in data-rich settings. However, uncertainty due to model misspecification continues to limit widespread industrial adoption. This paper studies a PdM framework in which sensor-driven prognostics inform decision-making under economic trade-offs within a finite decision space. We investigate …
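
As a hedged illustration of the decision layer such a framework feeds into, the sketch below picks the cheaper of two maintenance actions given a predicted failure probability; the two-action space, costs, and probability are illustrative assumptions, not elements of the paper's integrated estimate-optimize framework.

# Minimal estimate-then-optimize sketch for a two-action maintenance decision.
# The failure probability and the cost figures are illustrative assumptions.

def maintenance_decision(p_fail: float, c_preventive: float, c_failure: float) -> str:
    """Pick the action with the lower expected cost over a finite decision space."""
    expected_costs = {
        "preventive_maintenance": c_preventive,   # planned cost paid now
        "run_to_failure": p_fail * c_failure,     # expected cost of risking a failure
    }
    return min(expected_costs, key=expected_costs.get)

# Example: a prognostic model (not shown) estimates a 12% failure probability.
print(maintenance_decision(p_fail=0.12, c_preventive=1_000.0, c_failure=10_000.0))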

An optimal classifier for the generation of fair and calibrated synthetic data

For agent-based microsimulations, as used, for example, in epidemiological modeling during the COVID-19 pandemic, a realistic base population is crucial. Beyond demographic variables, health-related variables should also be included. In Germany, health-related surveys are typically small in scale, which presents several challenges when generating these variables. Specifically, strongly imbalanced classes and insufficient …
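
As context for the class-imbalance and calibration issues mentioned above, here is a minimal scikit-learn baseline (class weighting plus probability calibration) on synthetic data; it is not the optimal classifier proposed in the paper, and the dataset and parameters are assumptions made only for illustration.

# Generic baseline: balanced logistic regression with probability calibration.
# Synthetic data stands in for a small, strongly imbalanced health survey.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.calibration import CalibratedClassifierCV

X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.95, 0.05], random_state=0)

base = LogisticRegression(class_weight="balanced", max_iter=1000)
clf = CalibratedClassifierCV(base, method="sigmoid", cv=5)  # calibrated probabilities
clf.fit(X, y)

# Calibrated class probabilities can then drive sampling of synthetic attributes.
print(clf.predict_proba(X[:3]))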

Deterministic global optimization with trained neural networks: Is the envelope of single neurons worth it?

Optimization problems containing trained neural networks remain challenging due to their nonconvexity. Deterministic global optimization relies on relaxations that should be tight, quickly convergent, and cheap to evaluate. While envelopes of common activation functions have been established for several years, the envelope of an entire neuron had not been. Recently, Carrasco and Muñoz (arXiv:2410.23362, 2024) proposed …
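
For context, the long-established relaxation of a single ReLU activation on a bounded preactivation interval $[l,u]$ with $l<0<u$ is the triangle below; the cited work concerns the tighter envelope of an entire neuron (affine map composed with the activation), which this per-activation bound does not capture.

\[
\operatorname{conv}\bigl\{(z,y)\,:\, y=\max(0,z),\ z\in[l,u]\bigr\}
=\Bigl\{(z,y)\,:\, y\ge 0,\ \ y\ge z,\ \ y\le \tfrac{u\,(z-l)}{u-l}\Bigr\},
\qquad l<0<u.
\]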

Newtonian Methods with Wolfe Linesearch in Nonsmooth Optimization and Machine Learning

This paper introduces and develops coderivative-based Newton methods with Wolfe linesearch conditions to solve various classes of problems in nonsmooth optimization and machine learning. We first propose a generalized regularized Newton method with Wolfe linesearch (GRNM-W) for unconstrained $C^{1,1}$ minimization problems (which are second-order nonsmooth) and establish global as well as local superlinear convergence of …
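
As a hedged sketch of the generic ingredients only (a regularized Newton direction combined with a Wolfe linesearch), the snippet below runs on a smooth test function using SciPy's strong-Wolfe line search; the coderivative-based machinery for nonsmooth $C^{1,1}$ problems is not reproduced, and the fixed regularization parameter is an assumption.

# Regularized Newton direction + Wolfe linesearch on the (smooth) Rosenbrock function.
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der, rosen_hess

x = np.array([-1.2, 1.0])
mu = 1e-3                                   # fixed regularization (assumed for illustration)

for k in range(50):
    g = rosen_der(x)
    if np.linalg.norm(g) < 1e-8:
        break
    H = rosen_hess(x)
    d = np.linalg.solve(H + mu * np.eye(len(x)), -g)        # regularized Newton direction
    alpha = line_search(rosen, rosen_der, x, d, gfk=g)[0]   # step satisfying Wolfe conditions
    x = x + (alpha if alpha is not None else 1e-3) * d      # small fallback step if the search fails

print(k, x)   # approaches the minimizer (1, 1)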

Variable metric proximal stochastic gradient methods with additional sampling

Regularized empirical risk minimization problems arise in a variety of applications, including machine learning, signal processing, and image processing. Proximal stochastic gradient algorithms are a standard approach to solving these problems due to their low per-iteration computational cost and relatively simple implementation. This paper introduces a class of proximal stochastic gradient methods built …
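
Below is a minimal sketch of one variable-metric (scaled) proximal stochastic gradient iteration for an $\ell_1$-regularized least-squares toy problem; the AdaGrad-style diagonal metric and the omission of the additional-sampling step are simplifying assumptions, not the paper's actual scheme.

# Scaled proximal stochastic gradient with a diagonal metric D = diag(d):
#   z = x - alpha * D^{-1} g,  then the prox of lambda*||.||_1 in the D-norm (per-coordinate soft-threshold).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 50))
x_true = np.zeros(50)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(500)

lam, alpha, batch = 0.1, 0.5, 32
x = np.zeros(50)
acc = np.zeros(50)                                   # accumulated squared gradients

for k in range(2000):
    idx = rng.choice(500, size=batch, replace=False)
    g = A[idx].T @ (A[idx] @ x - b) / batch          # minibatch gradient
    acc += g ** 2
    d = np.sqrt(acc) + 1e-8                          # diagonal metric (AdaGrad-style, assumed)
    z = x - alpha * g / d                            # scaled gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - alpha * lam / d, 0.0)  # scaled prox of the l1 term

print(np.round(x[:8], 2))   # compare with x_true, whose first five entries equal 1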

Multiple Kernel Learning-Aided Column-and-Constraint Generation Method

Two-stage robust optimization (two-stage RO), due to its ability to balance robustness and flexibility, has been widely used in various fields for decision-making under uncertainty. This paper proposes a multiple kernel learning (MKL)-aided column-and-constraint generation (CCG) method for such decision-making in the context of data-driven decision optimization, and releases a corresponding registered Julia package, …
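
For reference, a generic two-stage RO problem that CCG targets can be written as

\[
\min_{x\in X}\; c^{\top}x \;+\; \max_{u\in\mathcal{U}}\;\min_{y\in Y(x,u)}\; b^{\top}y,
\]

where $x$ collects here-and-now decisions, $\mathcal{U}$ is the uncertainty set, and $y$ collects wait-and-see recourse decisions. CCG alternates between a master problem over $x$, augmented with recourse variables and constraints for the worst-case scenarios found so far, and a subproblem that searches $\mathcal{U}$ for a new worst case. This is the standard formulation, not the MKL-specific construction of the paper.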

Tuning-Free Bilevel Optimization: New Algorithms and Convergence Analysis

Bilevel optimization has recently attracted considerable attention due to its abundant applications in machine learning. However, existing methods rely on prior knowledge of problem parameters to determine stepsizes, resulting in significant effort in stepsize tuning when these parameters are unknown. In this paper, we propose two novel tuning-free algorithms, D-TFBO and S-TFBO. D-TFBO employs …
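
For reference, the generic bilevel problem such methods target is

\[
\min_{x}\; f\bigl(x, y^{*}(x)\bigr)
\quad\text{subject to}\quad
y^{*}(x) \in \operatorname*{arg\,min}_{y}\; g(x, y),
\]

where $f$ and $g$ are the upper- and lower-level objectives. Stepsize rules for bilevel methods typically depend on smoothness and strong-convexity constants of $f$ and $g$; this is the prior knowledge that tuning-free algorithms aim to do without. (Standard formulation; the specifics of D-TFBO and S-TFBO are not reproduced here.)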

Forecasting Urban Traffic States with Sparse Data Using Hankel Temporal Matrix Factorization

Forecasting urban traffic states is crucial to transportation network monitoring and management, playing an important role in the decision-making process. Despite the substantial progress that has been made in developing accurate, efficient, and reliable algorithms for traffic forecasting, most existing approaches fail to handle sparsity, high dimensionality, and nonstationarity in traffic time series and seldom consider …
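
To make the Hankel structure concrete, the snippet below delay-embeds a univariate series into a Hankel matrix and takes a rank-2 SVD approximation; the sine-plus-noise series, window length, and rank are illustrative assumptions, and the paper's HTMF model for sparse, high-dimensional, nonstationary traffic data is considerably richer.

# Delay-embed a series into a Hankel matrix and approximate it with a low-rank SVD.
import numpy as np

def hankel(series, window):
    """Stack overlapping windows of the series as columns of a Hankel matrix."""
    n = len(series) - window + 1
    return np.column_stack([series[i:i + window] for i in range(n)])

t = np.arange(300)
y = np.sin(2 * np.pi * t / 24) + 0.1 * np.random.default_rng(0).standard_normal(300)

H = hankel(y, window=48)                   # 48 x 253 Hankel matrix
U, s, Vt = np.linalg.svd(H, full_matrices=False)
H_lowrank = U[:, :2] * s[:2] @ Vt[:2]      # rank-2 approximation captures the periodic cycle

print(H.shape, np.linalg.norm(H - H_lowrank) / np.linalg.norm(H))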

An Extended Validity Domain for Constraint Learning

We consider embedding a predictive machine-learning model within a prescriptive optimization problem. In this setting, called constraint learning, we study the concept of a validity domain, i.e., a constraint added to the feasible set, which keeps the optimization close to the training data, thus helping to ensure that the computed optimal solution exhibits less prediction …
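
As a baseline for comparison, the simplest validity domain is a box around the training inputs, enforced as bounds on the decision variables; the sketch below, with a placeholder objective standing in for the embedded prediction, shows only this baseline notion and not the extended validity domain developed in the paper.

# Box validity domain: restrict the optimizer to the range of the training inputs.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X_train = rng.uniform(-1.0, 1.0, size=(200, 3))      # inputs the ML model was trained on

lo, hi = X_train.min(axis=0), X_train.max(axis=0)    # per-feature box validity domain

def objective(x):
    # Placeholder for "cost + embedded ML prediction"; a real constraint-learning
    # model would evaluate the trained predictor here.
    return np.sum((x - 2.0) ** 2)

res = minimize(objective, x0=X_train.mean(axis=0), bounds=list(zip(lo, hi)))
print(res.x)   # stays inside the box even though the unconstrained optimum lies at 2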

A graph-structured distance for mixed-variable domains with meta variables

Heterogeneous datasets emerge in various machine learning and optimization applications that feature different input sources, types, or formats. Most models or methods do not natively tackle heterogeneity. Hence, such datasets are often partitioned into smaller and simpler ones, which may limit generalizability or performance, especially if data is limited. The first main contribution of …
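
As a point of reference, a naive mixed-variable distance simply adds a Euclidean term over continuous variables and a Hamming term over categorical ones, as sketched below; the paper's graph-structured distance additionally handles meta variables that determine which other variables are present, which this simplification ignores. The example values and weight are assumptions.

# Naive mixed-variable distance: Euclidean on continuous parts + weighted Hamming on categorical parts.
import numpy as np

def mixed_distance(x_cont, x_cat, y_cont, y_cat, w_cat=1.0):
    d_cont = np.linalg.norm(np.asarray(x_cont) - np.asarray(y_cont))
    d_cat = sum(a != b for a, b in zip(x_cat, y_cat))
    return d_cont + w_cat * d_cat

print(mixed_distance([0.2, 1.5], ["diesel", "automatic"],
                     [0.0, 1.0], ["petrol", "automatic"]))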