Coherent Local Explanations for Mathematical Optimization

The surge of explainable artificial intelligence methods seeks to enhance transparency and explainability in machine learning models. At the same time, there is a growing demand for explaining decisions made by the complex algorithms used in mathematical optimization. However, current explanation methods do not take into account the structure of the underlying optimization problem, leading to … Read more

Pareto sensitivity, most-changing sub-fronts, and knee solutions

When dealing with a multi-objective optimization problem, obtaining a comprehensive representation of the Pareto front can be computationally expensive. Furthermore, identifying the most representative Pareto solutions can be difficult and sometimes ambiguous. A popular selection is the set of so-called Pareto knee solutions, where a small improvement in any objective leads to a large deterioration in at … Read more
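As a rough illustration of the knee idea (not the sensitivity-based machinery of the paper), the sketch below picks, from a discretized bi-objective front, the point farthest from the line through the two extreme solutions; the front data and this particular selection rule are illustrative assumptions, one common heuristic among several.

    import numpy as np

    def knee_point(front):
        """Pick a knee of a 2-D Pareto front (both objectives minimized) as the
        point with maximum perpendicular distance from the line joining the two
        extreme solutions."""
        front = front[np.argsort(front[:, 0])]
        a, b = front[0], front[-1]
        ab = b - a
        # perpendicular distance of each point to the line through a and b
        dist = np.abs(ab[0] * (front[:, 1] - a[1]) - ab[1] * (front[:, 0] - a[0]))
        dist /= np.linalg.norm(ab)
        return front[np.argmax(dist)]

    # toy convex front: f2 = 1 / (1 + f1)
    f1 = np.linspace(0.0, 4.0, 41)
    front = np.column_stack([f1, 1.0 / (1.0 + f1)])
    print(knee_point(front))   # around the bend where the trade-off flattens out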

Local Upper Bounds Based on Polyhedral Ordering Cones

The concept of local upper bounds plays an important role for numerical algorithms in nonconvex, integer, and mixed-integer multiobjective optimization with respect to the componentwise partial ordering, that is, where the ordering cone is the nonnegative orthant. In this paper, we answer the question of whether and how this concept can be extended to arbitrary … Read more
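For the componentwise ordering that the paper starts from, the notion can be illustrated by the incremental update of a local upper bound set when a new nondominated point is inserted. The sketch below is a naive O(|U|² m) version for minimization over a box with a single artificial upper bound M; the efficient data structures and the extension to general polyhedral ordering cones studied in the paper are not reproduced here.

    import numpy as np

    def update_local_upper_bounds(U, z):
        """Naive update of a local upper bound set U (list of 1-D arrays) after
        inserting a new nondominated point z, for minimization under the
        componentwise ordering."""
        z = np.asarray(z, dtype=float)
        affected = [u for u in U if np.all(z < u)]    # search zones containing z
        kept = [u for u in U if not np.all(z < u)]
        candidates = []
        for u in affected:
            for j in range(z.size):                   # split each affected zone along axis j
                c = u.copy()
                c[j] = z[j]
                candidates.append(c)
        merged = kept + candidates
        # keep only componentwise-maximal bounds (discard redundant search zones)
        return [u for i, u in enumerate(merged)
                if not any(i != k and np.all(u <= v) and np.any(u < v)
                           for k, v in enumerate(merged))]

    M = 10.0                                          # artificial global upper bound
    U = [np.array([M, M])]
    for point in ([3.0, 7.0], [2.0, 8.0]):
        U = update_local_upper_bounds(U, point)
    print([u.tolist() for u in U])                    # [[10.0, 7.0], [2.0, 10.0], [3.0, 8.0]]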

On Two Vectorization Schemes for Set-valued Optimization

In this paper, we investigate two known solution approaches for set-valued optimization problems, both of which are based on so-called vectorization strategies. These strategies consist of deriving a parametric family of multi-objective optimization problems whose optimal solution sets approximate those of the original set-valued problem with arbitrary accuracy in a certain sense. Thus, these approaches … Read more

SMOP: Stochastic trust region method for multi-objective problems

We consider a multi-objective optimization problem in which the goal is to optimize a vector-valued function representing various criteria. The aim of this work is to develop an algorithm that utilizes the trust region framework with probabilistic model functions and is able to cope with noisy problems, using inaccurate functions and … Read more
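To give a flavour of a trust-region iteration driven by noisy evaluations, here is a single-objective caricature (not SMOP itself): the model gradient is estimated from sample-averaged noisy function values, a Cauchy-like step is taken, and the step is accepted or rejected by the usual ratio test. The sampling scheme, constants, and scalar setting are all illustrative assumptions.

    import numpy as np

    def noisy_tr_step(f_noisy, x, delta, n_samples=30, h=0.1, eta=0.1):
        """One illustrative trust-region step built from averaged noisy samples."""
        fbar = lambda y: np.mean([f_noisy(y) for _ in range(n_samples)])
        f0 = fbar(x)
        g = np.array([(fbar(x + h * e) - f0) / h for e in np.eye(x.size)])
        step = -delta * g / (np.linalg.norm(g) + 1e-12)
        pred = -g @ step                     # model-predicted decrease
        ared = f0 - fbar(x + step)           # estimated actual decrease
        if pred > 0 and ared / pred >= eta:
            return x + step, 2.0 * delta     # successful step: expand the region
        return x, 0.5 * delta                # unsuccessful step: shrink the region

    rng = np.random.default_rng(0)
    f = lambda x: np.sum((x - 1.0) ** 2) + 0.01 * rng.normal()   # noisy objective
    x, delta = np.zeros(2), 1.0
    for _ in range(30):
        x, delta = noisy_tr_step(f, x, delta)
    print(x)                                 # should end up near [1, 1]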

New Nonlinear Conjugate Gradient Methods with Guaranteed Descent for Multi-Objective Optimization

In this article, we present several examples of special nonlinear conjugate gradient directions for nonlinear (non-convex) multi-objective optimization. These directions are descent directions for all objectives, independently of the line search. This way, we can provide an algorithm with simple, Armijo-like backtracking and prove convergence to first-order critical points. In contrast to other popular conjugate … Read more
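The conjugate gradient directions themselves are the paper's contribution. As background, the sketch below uses the classical common steepest-descent direction for two objectives (the negated minimum-norm convex combination of the gradients) together with Armijo-like backtracking that demands sufficient decrease in every objective; the toy objectives and constants are assumptions for illustration only.

    import numpy as np

    # toy bi-objective problem: minimize both f1 and f2
    f1 = lambda x: np.sum((x - 1.0) ** 2)
    f2 = lambda x: np.sum((x + 1.0) ** 2)
    g1 = lambda x: 2.0 * (x - 1.0)
    g2 = lambda x: 2.0 * (x + 1.0)

    def common_descent_direction(a, b):
        """Negated minimum-norm point of conv{a, b}: a direction along which
        both objectives decrease (closed form for two gradients)."""
        diff = a - b
        denom = diff @ diff
        lam = 0.5 if denom == 0.0 else float(np.clip((b - a) @ b / denom, 0.0, 1.0))
        return -(lam * a + (1.0 - lam) * b)

    def armijo_step(x, d, c=1e-4, beta=0.5):
        """Armijo-like backtracking requiring sufficient decrease in BOTH objectives."""
        t = 1.0
        while not all(f(x + t * d) <= f(x) + c * t * (g(x) @ d)
                      for f, g in ((f1, g1), (f2, g2))):
            t *= beta
            if t < 1e-10:
                break
        return t

    x = np.array([3.0, -2.0])
    for _ in range(50):
        d = common_descent_direction(g1(x), g2(x))
        if np.linalg.norm(d) < 1e-8:         # Pareto-critical point reached
            break
        x = x + armijo_step(x, d) * d
    print(x)   # a Pareto-critical point: here any x with equal components in [-1, 1]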

Reduction from the partition problem: Dynamic lot sizing problem with polynomial complexity

In this note, we polynomially reduce an instance of the partition problem to a dynamic lot sizing problem and show that solving the latter solves the former. By solving the dynamic programming formulation of the dynamic lot sizing problem, we show that the instance of the partition problem can be solved with pseudo-polynomial … Read more
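For context on what pseudo-polynomial means here, the following is the textbook dynamic program that decides PARTITION in time polynomial in the numeric magnitude of the input (but not in its bit length); it is background only, not the note's reduction to lot sizing.

    def can_partition(weights):
        """Pseudo-polynomial DP for PARTITION: decide whether the multiset can be
        split into two subsets of equal sum.  Runs in O(n * total/2) time."""
        total = sum(weights)
        if total % 2:
            return False
        target = total // 2
        reachable = [True] + [False] * target      # reachable[s]: some subset sums to s
        for w in weights:
            for s in range(target, w - 1, -1):     # go downwards so each item is used once
                if reachable[s - w]:
                    reachable[s] = True
        return reachable[target]

    print(can_partition([3, 1, 1, 2, 2, 1]))       # True: {3, 2} vs {1, 1, 2, 1}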

A Generalized Voting Game for Categorical Network Choices

This paper presents a game-theoretical framework for data classification and network discovery, focusing on pairwise influences in multivariate choices. The framework consists of two complementary games in which individuals, connected through a signed weighted graph, exhibit network similarity. A voting rule captures the influence of an individual’s neighbors, categorized as attractive (friend-like) or repulsive (enemy-like), … Read more
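Purely as a toy illustration of what a voting rule with attractive and repulsive neighbors might look like (the paper's actual games and equilibrium concepts are not reproduced), the sketch below lets each node adopt the binary category favoured by the signed, weighted sum of its neighbors' current categories.

    import numpy as np

    def vote_update(W, labels):
        """One synchronous round of a toy voting rule on a signed weighted graph.
        W[i, j] > 0: j attracts i toward j's category (+1/-1);
        W[i, j] < 0: j repels i toward the opposite category."""
        scores = W @ labels                          # net pull on each node
        return np.where(scores > 0, 1, np.where(scores < 0, -1, labels))

    # 4 nodes: friends within {0,1} and {2,3}, enemies across the two groups
    W = np.array([[ 0.0,  1.0, -1.0,  0.0],
                  [ 1.0,  0.0,  0.0, -1.0],
                  [-1.0,  0.0,  0.0,  1.0],
                  [ 0.0, -1.0,  1.0,  0.0]])
    labels = np.array([1, 1, 1, -1])
    for _ in range(5):                               # iterate a few rounds
        labels = vote_update(W, labels)
    print(labels)                                    # friends align, enemies oppose: [1, 1, -1, -1]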

Guaranteed bounds for optimal stopping problems using kernel-based non-asymptotic uniform confidence bands

In this paper, we introduce an approach for obtaining probabilistically guaranteed upper and lower bounds on the true optimal value of stopping problems. Bounds of existing simulation-and-regression approaches, such as those based on least squares Monte Carlo and information relaxation, are stochastic in nature and therefore do not come with a finite-sample guarantee. Our … Read more
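To make the contrast concrete, here is a bare-bones Longstaff-Schwartz (least squares Monte Carlo) estimate for a toy Bermudan put; the price model, basis functions, and parameters are illustrative assumptions. Its output is exactly the kind of stochastic point estimate, without a finite-sample guarantee, that the proposed kernel-based confidence bands are meant to bracket.

    import numpy as np

    rng = np.random.default_rng(1)
    S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
    n_steps, n_paths = 50, 20000
    dt = T / n_steps
    disc = np.exp(-r * dt)

    # simulate geometric Brownian motion paths
    Z = rng.standard_normal((n_paths, n_steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1))
    S = np.hstack([np.full((n_paths, 1), S0), S])

    payoff = lambda s: np.maximum(K - s, 0.0)        # Bermudan put exercise value

    # backward induction: regress continuation values on polynomial features of
    # the current price, exercise when the immediate payoff beats the estimate
    cash = payoff(S[:, -1])
    for t in range(n_steps - 1, 0, -1):
        cash *= disc
        itm = payoff(S[:, t]) > 0                    # regress only on in-the-money paths
        if itm.sum() > 10:
            X = np.vander(S[itm, t] / K, 4)          # polynomial features up to degree 3
            beta, *_ = np.linalg.lstsq(X, cash[itm], rcond=None)
            exercise = payoff(S[itm, t]) > X @ beta
            cash[itm] = np.where(exercise, payoff(S[itm, t]), cash[itm])
    price = disc * np.mean(cash)
    print(round(price, 3))                           # a stochastic point estimate, no finite-sample guarantee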