Bilevel learning refers to machine learning problems that can be formulated as bilevel optimization models, in which decisions are organized in a hierarchical structure. This paradigm has recently gained considerable attention in machine learning, as gradient-based algorithms built on the implicit function reformulation have made it possible to solve large-scale problems involving possibly millions of variables. Despite these advances, the implicit function framework relies on restrictive assumptions, notably the requirement that the lower-level problem admit a unique optimal solution for each upper-level decision. Moreover, computing the derivative of the lower-level optimal solution map becomes significantly more involved when the lower-level problem includes constraints. As a result, many existing bilevel learning algorithms are effective only for relatively narrow classes of problems. This paper reviews the main algorithmic ideas underlying recent progress in bilevel learning, highlighting both the key mechanisms responsible for their scalability and the limitations that arise in more general settings. We then draw connections with the broader bilevel optimization literature and discuss algorithmic techniques that may help overcome these limitations. Our aim is to bridge the gap between bilevel learning and classical bilevel optimization, thereby supporting the development of scalable methods capable of solving more general large-scale bilevel programs.
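To illustrate the implicit function reformulation mentioned above, the following is a minimal sketch on a hypothetical toy instance (quadratic lower level with a unique solution, chosen precisely so the framework's uniqueness assumption holds); all variable names and the specific objectives are illustrative, not taken from the paper:

```python
import numpy as np

# Toy bilevel problem (hypothetical illustration):
#   Upper level: F(x) = f(x, y*(x)),  f(x, y) = 0.5 * ||y - b||^2
#   Lower level: y*(x) = argmin_y g(x, y),  g(x, y) = 0.5 * ||y - A x||^2
# The lower-level Hessian is the identity, so y*(x) is unique, as the
# implicit function framework requires.
rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
x = rng.standard_normal(n)

# Solve the lower-level problem (closed form here: y*(x) = A x).
y_star = A @ x

# Implicit function theorem: grad_y g(x, y*(x)) = 0 for all x, so
#   dy*/dx = -[grad_yy g]^{-1} @ (d/dx grad_y g),
# and the hypergradient is
#   grad F(x) = grad_x f + (dy*/dx)^T grad_y f.
grad_yy_g = np.eye(m)     # Hessian of g in y
jac_xy_g = -A             # Jacobian in x of grad_y g = y - A x
grad_y_f = y_star - b     # partial gradient of f in y
grad_x_f = np.zeros(n)    # f has no direct dependence on x

hypergrad = grad_x_f - jac_xy_g.T @ np.linalg.solve(grad_yy_g, grad_y_f)

# Sanity check against the analytic gradient of F(x) = 0.5 * ||A x - b||^2.
analytic = A.T @ (A @ x - b)
assert np.allclose(hypergrad, analytic)
```

In large-scale settings the linear system with the lower-level Hessian is typically solved approximately (e.g., by conjugate gradient) rather than factored, which is one source of the scalability the abstract alludes to; when the lower-level solution is non-unique or constrained, the Hessian solve above is no longer well defined, which is exactly the limitation the paper discusses.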
Citation
Riccardo Grazzi, Massimiliano Pontil, Saverio Salzo, and Alain Zemkoho (2026) Bilevel learning, https://optimization-online.org/2026/03/bilevel-learning/