In this paper, we develop a stochastic set-valued optimization (SVO) framework tailored
for robust machine learning. In the SVO setting, each decision variable is mapped to a
set of objective values, and optimality is defined via set relations. We focus on SVO problems
with hyperbox sets, which can be reformulated as multi-objective optimization (MOO)
problems with finitely many objectives and serve as a foundation for representing or approximating
more general mapped sets. Two special cases of hyperbox-valued optimization
(HVO) are interval-valued (IVO) and rectangle-valued (RVO) optimization. We construct
stochastic IVO/RVO formulations that incorporate subquantiles and superquantiles into the
objective functions of the MOO reformulations, providing a new characterization of subquantiles.
These formulations provide interpretable trade-offs by capturing both lower- and
upper-tail behaviors of loss distributions, thereby going beyond standard empirical risk minimization
and classical robust models. To solve the resulting multi-objective problems, we
adopt stochastic multi-gradient algorithms and select a Pareto knee solution. In numerical
experiments, the proposed algorithms with this selection strategy exhibit improved robustness
and reduced variability across test replications under distributional shift compared with
empirical risk minimization, while maintaining competitive accuracy.
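The subquantile and superquantile objectives mentioned above have standard empirical estimators. A minimal sketch, assuming the superquantile is the mean of losses at or above the alpha-quantile (the CVaR-style upper tail) and the subquantile is the mean of losses at or below it; the function names and the sample data are illustrative, not taken from the paper:

```python
import numpy as np

def superquantile(losses, alpha):
    # Mean of losses at or above the alpha-quantile (upper-tail average).
    q = np.quantile(losses, alpha)
    return losses[losses >= q].mean()

def subquantile(losses, alpha):
    # Mean of losses at or below the alpha-quantile (lower-tail average).
    q = np.quantile(losses, alpha)
    return losses[losses <= q].mean()

losses = np.array([0.1, 0.2, 0.5, 1.0, 2.0])
lo = subquantile(losses, 0.4)   # averages the favorable (low-loss) tail
hi = superquantile(losses, 0.8) # averages the adverse (high-loss) tail
```

Optimizing both tails jointly, rather than only the mean as in empirical risk minimization, is what yields the lower-/upper-tail trade-off described in the abstract.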
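The knee-solution selection can be illustrated with a common heuristic (an assumption here, not necessarily the rule used in the paper): among points on a bi-objective Pareto front, pick the one farthest from the line joining the front's two extreme points.

```python
import numpy as np

def knee_point(front):
    # front: (n, 2) array of Pareto-optimal points sorted by the first objective.
    # Returns the index of the point with maximum perpendicular distance to the
    # line through the two extreme points of the front.
    front = np.asarray(front, dtype=float)
    a, b = front[0], front[-1]
    d = (b - a) / np.linalg.norm(b - a)   # unit direction of the extreme line
    vecs = front - a
    proj = vecs @ d                        # component along the line
    perp = vecs - np.outer(proj, d)        # component perpendicular to the line
    return int(np.argmax(np.linalg.norm(perp, axis=1)))

front = np.array([[0.0, 1.0], [0.1, 0.3], [0.4, 0.2], [1.0, 0.0]])
idx = knee_point(front)  # selects the point with the sharpest trade-off bend
```

The knee point is a popular default because moving away from it in either direction trades a large loss in one objective for a small gain in the other.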
Citation
T. Giovannelli, J. Tan, and L. N. Vicente, Stochastic set-valued optimization and its application to robust learning, ISE Technical Report 26T-005, Lehigh University.