ε-Optimality in Reverse Optimization

The purpose of this paper is to completely characterize global approximate optimality (ε-optimality) in reverse convex optimization under the general nonconvex constraint “h(x) ≥ 0”. The main condition is stated in terms of Fenchel’s ε-subdifferentials and is obtained via El Maghri’s ε-efficiency in difference vector optimization [J. Glob. Optim. 61 (2015) 803–812], after converting the …
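
For reference, the Fenchel ε-subdifferential appearing in this condition is the standard one; a brief reminder of the definition (standard material, not specific to this paper):

```latex
% Fenchel \varepsilon-subdifferential of a proper convex function f at x (standard definition)
\[
  \partial_{\varepsilon} f(x) \;=\;
  \bigl\{\, x^{*} \;:\; f(y) \ \ge\ f(x) + \langle x^{*},\, y - x \rangle - \varepsilon
  \ \ \text{for all } y \,\bigr\}, \qquad \varepsilon \ge 0 .
\]
```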

Managing Distributional Ambiguity in Stochastic Optimization through a Statistical Upper Bound Framework

Stochastic optimization is often hampered by distributional ambiguity, where critical probability distributions are poorly characterized or unknown. Addressing this challenge, we introduce a new framework that targets the minimization of a statistical upper bound for the expected value of uncertain objectives, facilitating more statistically robust decision-making. Central to our approach is the Average Percentile Upper …
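
To make the idea of minimizing a statistical upper bound (rather than a plain sample average) concrete, here is a minimal sketch in which a generic percentile-augmented sample bound is minimized over a grid; the quadratic cost, the scenario data, and the bound itself are made-up placeholders, not the paper’s construction.

```python
import numpy as np

rng = np.random.default_rng(0)
scenarios = rng.normal(1.0, 0.3, size=200)      # samples of the uncertain parameter (made-up data)

def cost(x, xi):
    """Made-up uncertain objective: quadratic cost under scenario xi."""
    return (x - xi) ** 2 + 0.1 * abs(x)

def statistical_upper_bound(x, alpha=0.9, weight=0.5):
    """Generic placeholder bound: a mix of the sample mean and an upper percentile.
    This stands in for, but is NOT, the paper's statistical upper bound."""
    vals = cost(x, scenarios)
    return (1 - weight) * vals.mean() + weight * np.quantile(vals, alpha)

# Minimize the upper bound instead of the sample mean (coarse grid search for simplicity).
grid = np.linspace(-1.0, 3.0, 401)
best_x = min(grid, key=statistical_upper_bound)
print(f"decision minimizing the upper bound: {best_x:.3f}")
```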

Black-box optimization for the design of a jet plate for impingement cooling

In this work, we propose a novel black-box formulation of the impingement cooling system for a nozzle in a gas turbine. Leveraging a well-known model that correlates the design features of the cooling system with the efficiency parameters, we develop NOZZLE, a new constrained black-box optimization formulation for the jet impingement cooling design. Then …
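
To illustrate the black-box setting (the solver only queries function values), here is a minimal sketch of a penalized random search on a toy design problem; the objective and constraint are stand-ins, not the NOZZLE cooling model or its correlations.

```python
import numpy as np

rng = np.random.default_rng(1)

def efficiency(d):
    """Toy stand-in for the black-box simulator that maps jet-plate design
    features to a cooling-efficiency score (to be maximized)."""
    return np.exp(-np.sum((d - 0.6) ** 2))

def constraint(d):
    """Toy design constraint g(d) <= 0, e.g. a budget on total jet area."""
    return np.sum(d) - 2.0

def penalized(d, mu=10.0):
    # Minimize negative efficiency plus a quadratic penalty on constraint violation.
    return -efficiency(d) + mu * max(0.0, constraint(d)) ** 2

# Plain random search over the unit box: only function values are used.
dim, best_d, best_val = 4, None, np.inf
for _ in range(5000):
    d = rng.uniform(0.0, 1.0, size=dim)
    val = penalized(d)
    if val < best_val:
        best_d, best_val = d, val
print("best design found:", np.round(best_d, 3), " penalized value:", round(best_val, 4))
```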

The stochastic Ravine accelerated gradient method with general extrapolation coefficients

In a real Hilbert space setting, we study the convergence properties of the stochastic Ravine accelerated gradient method for convex differentiable optimization. We consider the general form of this algorithm, where the extrapolation coefficients can vary with each iteration and the evaluation of the gradient is subject to random errors. This general …
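
For orientation, a minimal sketch of a Ravine-type accelerated iteration with iteration-dependent extrapolation coefficients and noisy gradient evaluations is given below; it assumes the usual “gradient step, then extrapolation on the gradient-step points” form of the Ravine method, a simple quadratic test function, and one common choice of coefficients, and it is not the paper’s algorithm or analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.diag([1.0, 10.0])                 # test function f(x) = 0.5 * x^T A x (convex quadratic)
exact_grad = lambda x: A @ x

def noisy_grad(x, sigma=0.05):
    """Gradient evaluation subject to random errors."""
    return exact_grad(x) + sigma * rng.standard_normal(x.shape)

s = 0.09                                 # step size below 1/L (here L = 10)
y = np.array([5.0, 3.0])
z_prev = y - s * noisy_grad(y)           # first gradient step
y = z_prev
for k in range(1, 200):
    alpha_k = (k - 1) / (k + 2)          # one common choice of extrapolation coefficients
    z = y - s * noisy_grad(y)            # gradient step at the extrapolated point
    y = z + alpha_k * (z - z_prev)       # Ravine-type extrapolation on the gradient-step points
    z_prev = z
print("final objective value:", 0.5 * z @ A @ z)
```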

A Jacobi-type Newton method for Nash equilibrium problems with descent guarantees

A common strategy for solving an unconstrained two-player Nash equilibrium problem with continuous variables is to apply Newton’s method to the system obtained from the corresponding first-order necessary optimality conditions. However, when the game dynamics are taken into account, it is not clear what the goal of each player is when considering that they are taking their current …
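
For context, the common strategy described above amounts to running Newton’s method on the concatenated stationarity system of the two players; a minimal sketch on a toy quadratic game follows (this is the baseline approach, not the paper’s Jacobi-type variant with descent guarantees).

```python
import numpy as np

# Toy two-player game: player i minimizes its own cost in x_i with the other decision fixed.
#   f1(x1, x2) = 0.5*x1**2 + x1*x2 - x1
#   f2(x1, x2) = 0.5*x2**2 + 0.5*x1*x2 + x2
def F(x):
    """Concatenated first-order conditions (df1/dx1, df2/dx2) = 0."""
    x1, x2 = x
    return np.array([x1 + x2 - 1.0,
                     0.5 * x1 + x2 + 1.0])

def JF(x):
    """Jacobian of F (constant for this quadratic game)."""
    return np.array([[1.0, 1.0],
                     [0.5, 1.0]])

x = np.zeros(2)
for _ in range(20):
    step = np.linalg.solve(JF(x), -F(x))
    x = x + step
    if np.linalg.norm(step) < 1e-12:
        break
print("Nash equilibrium candidate:", x)   # for this linear system, one Newton step suffices
```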

Data Collaboration Analysis with Orthonormal Basis Selection and Alignment

Data Collaboration (DC) enables multiple parties to jointly train a model without exposing their private datasets. Each party privately transforms its data using a secret linear basis and shares only the resulting intermediate representations. Existing theory asserts that any target basis spanning the same subspace as the secret bases should suffice; however, empirical evidence reveals …
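
A minimal sketch of the sharing mechanism described above: each party projects its private data through its own secret basis with orthonormal columns (drawn here via QR of a random matrix, an illustrative assumption) and shares only the transformed representation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_features, k = 10, 5                        # original and reduced dimensions (made up)

def secret_orthonormal_basis(seed):
    """Party-private basis with orthonormal columns, via QR of a random Gaussian matrix."""
    q, _ = np.linalg.qr(np.random.default_rng(seed).standard_normal((n_features, k)))
    return q                                 # shape (n_features, k), with q.T @ q = I_k

# Each party holds its own private dataset (rows = samples).
parties = {name: rng.standard_normal((30, n_features)) for name in ("A", "B", "C")}

shared = {}
for seed, (name, X) in enumerate(parties.items()):
    B = secret_orthonormal_basis(seed)       # never leaves the party
    shared[name] = X @ B                     # only this intermediate representation is shared

# A central analyst sees the intermediate representations, never the raw data or the bases.
for name, Z in shared.items():
    print(name, Z.shape)
```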

A Polyhedral Characterization of Linearizable Quadratic Combinatorial Optimization Problems

We introduce a polyhedral framework for characterizing instances of quadratic combinatorial optimization problems (QCOPs) as linearizable, meaning that the quadratic objective can be equivalently rewritten as linear in a manner that preserves the objective function value at all feasible solutions. In particular, we show that an instance is linearizable if and only if …
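
As a simple, well-known special case of linearizability (an illustration only, not the paper’s polyhedral characterization): over binary variables, diagonal quadratic terms can be rewritten linearly because x_i² = x_i, so the objective value is preserved at every feasible solution.

```latex
% Binary variables: x_i \in \{0,1\} implies x_i^2 = x_i, so a diagonal quadratic
% objective is linearizable without changing the value at any feasible point.
\[
  \sum_{i=1}^{n} q_{ii}\, x_i^{2} + \sum_{i=1}^{n} c_i\, x_i
  \;=\;
  \sum_{i=1}^{n} \bigl(q_{ii} + c_i\bigr)\, x_i
  \qquad \text{for all } x \in \{0,1\}^{n}.
\]
```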

Uncertainty Quantification for Multiobjective Stochastic Convex Quadratic Programs

A multiobjective stochastic convex quadratic program (MOSCQP) is a multiobjective optimization problem with convex quadratic objectives that are observed with stochastic error. MOSCQP is a useful problem formulation arising, for example, in model calibration and nonlinear system identification when a single regression model combines data from multiple distinct sources, resulting in a multiobjective least squares …
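
To fix ideas on the problem class, here is a minimal sketch of a two-objective stochastic least-squares instance of the kind described above; the data and noise model are made up, and the weighted-sum scalarization is only a naive way to probe the Pareto front, not the paper’s uncertainty-quantification method.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two data sources sharing one regression model x: objectives f_k(x) = ||A_k x - b_k||^2,
# each observed with stochastic error.
A1, b1 = rng.standard_normal((40, 3)), rng.standard_normal(40)
A2, b2 = rng.standard_normal((25, 3)), rng.standard_normal(25)

def noisy_objectives(x, sigma=0.05):
    """Both convex quadratic objectives, observed with additive stochastic error."""
    f1 = np.sum((A1 @ x - b1) ** 2) + sigma * rng.standard_normal()
    f2 = np.sum((A2 @ x - b2) ** 2) + sigma * rng.standard_normal()
    return f1, f2

# Naive weighted-sum scalarization: each weight yields one (approximate) Pareto point.
for w in (0.1, 0.5, 0.9):
    # Closed-form minimizer of w*||A1 x - b1||^2 + (1 - w)*||A2 x - b2||^2.
    H = w * A1.T @ A1 + (1 - w) * A2.T @ A2
    g = w * A1.T @ b1 + (1 - w) * A2.T @ b2
    x_w = np.linalg.solve(H, g)
    f1, f2 = noisy_objectives(x_w)
    print(f"w = {w:.1f}:  f1 ≈ {f1:.3f},  f2 ≈ {f2:.3f}")
```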