There’s No Free Lunch: On the Hardness of Choosing a Correct Big-M in Bilevel Optimization

One of the most frequently used approaches to solving linear bilevel optimization problems consists in replacing the lower-level problem with its Karush-Kuhn-Tucker (KKT) conditions and then reformulating the KKT complementarity conditions using techniques from mixed-integer linear optimization. The latter step requires determining a big-M constant in order to bound the lower level's dual feasible …
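As a brief illustration of the reformulation the abstract refers to, a minimal sketch in generic notation (an assumption, not the paper's): for a lower-level LP min_y { dᵀy : Ax + By ≥ b, y ≥ 0 } with duals λ, the KKT complementarity conditions can be linearized with binaries and a big-M constant. The reformulation is exact only if M really bounds the dual variables and slacks at an optimal point, which is precisely the choice the title calls hard.

```latex
% Minimal sketch of the KKT-plus-big-M reformulation (generic notation,
% not taken from the paper): lower level
%   min_y { d^T y : Ax + By >= b, y >= 0 }   with duals  \lambda >= 0.
\begin{align*}
  & Ax + By \ge b,\quad y \ge 0,\quad \lambda \ge 0,\quad B^{\top}\lambda \le d
    && \text{(primal/dual feasibility)}\\
  & \lambda_i \le M z_i,\quad (Ax + By - b)_i \le M(1 - z_i),\quad z_i \in \{0,1\}
    && (\lambda \perp \text{primal slacks})\\
  & y_j \le M u_j,\quad (d - B^{\top}\lambda)_j \le M(1 - u_j),\quad u_j \in \{0,1\}
    && (y \perp \text{dual slacks})
\end{align*}
```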

An Enhanced Logical Benders Approach for Linear Programs with Complementarity Constraints

This work extends the logical Benders approaches of Hu et al. (2008) and Bai et al. (2013) for globally solving linear programs with complementarity constraints. By interpreting the logical Benders method as a reversed branch-and-bound method, in which the exploration starts from the leaf nodes of an enumeration tree, we provide …
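A heavily simplified sketch of the logical Benders idea, with hypothetical data and helper names (the actual method derives sparse logical cuts from infeasible subproblems; the full no-good cuts below are only for brevity): each binary "piece" fixes one side of every complementarity pair, the subproblem is then an ordinary LP, and infeasible pieces generate cuts for the master.

```python
# Illustrative logical Benders loop for an LPCC (assumes SciPy):
#   min  c^T x   s.t.  0 <= x_i  complementary to  (M x + q)_i >= 0.
# A piece z in {0,1}^n fixes, per index i, either x_i = 0 (z_i = 0)
# or (M x + q)_i = 0 (z_i = 1).
import itertools
import numpy as np
from scipy.optimize import linprog

def solve_piece(c, M, q, z):
    """LP subproblem for a fixed piece z: returns (objective, x) or (inf, None)."""
    n = len(c)
    A_eq, b_eq = [], []
    for i, zi in enumerate(z):
        if zi == 0:                      # fix x_i = 0
            row = np.zeros(n); row[i] = 1.0
            A_eq.append(row); b_eq.append(0.0)
        else:                            # fix (M x + q)_i = 0
            A_eq.append(M[i]); b_eq.append(-q[i])
    res = linprog(c, A_ub=-M, b_ub=q,    # keep M x + q >= 0
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * n)
    return (res.fun, res.x) if res.success else (np.inf, None)

def logical_benders(c, M, q):
    """Naive master: scan pieces, skipping any piece hit by a no-good cut."""
    n = len(c)
    cuts, best = [], (np.inf, None)
    for z in itertools.product([0, 1], repeat=n):
        if any(all(z[i] == zc[i] for i in S) for S, zc in cuts):
            continue                     # piece excluded by an earlier cut
        val, x = solve_piece(c, M, q, z)
        if x is None:
            cuts.append((range(n), z))   # crude full no-good cut (sketch only)
        elif val < best[0]:
            best = (val, x)
    return best

# Tiny example with hypothetical data.
c = np.array([-1.0, -2.0])
M = np.array([[2.0, 1.0], [1.0, 3.0]])
q = np.array([-2.0, -3.0])
print(logical_benders(c, M, q))
```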

Γ-Robust Linear Complementarity Problems

Complementarity problems are often used to compute equilibria made up of specifically coordinated solutions of different optimization problems. Typical examples are game-theoretic settings such as the bimatrix game and energy market models, e.g., for electricity or natural gas. While optimization under uncertainty is rather well developed, the field of equilibrium models represented by complementarity problems under uncertainty …
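For orientation, a minimal sketch in generic notation (an assumption, not necessarily the paper's formulation): the nominal linear complementarity problem and a Bertsimas–Sim-style Γ-uncertainty set over the data vector q.

```latex
% Nominal LCP: find z such that
%   z >= 0,  Mz + q >= 0,  z^T (Mz + q) = 0.
% Bertsimas--Sim-style uncertainty: each q_i may deviate from its nominal
% value \hat{q}_i by at most \Delta_i, with at most \Gamma entries deviating:
\[
  U_{\Gamma} \;=\; \Bigl\{\, \hat{q} + \sum_i \delta_i \Delta_i e_i \;:\;
    \lvert\delta_i\rvert \le 1,\ \sum_i \lvert\delta_i\rvert \le \Gamma \,\Bigr\},
\]
% and a \Gamma-robust solution is required to satisfy the complementarity
% system in a worst-case sense over all q \in U_{\Gamma}.
```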

Active-set Newton methods and partial smoothness

Diverse optimization algorithms correctly identify, in finite time, intrinsic constraints that must be active at optimality. Analogous behavior extends beyond optimization to systems involving partly smooth operators, and in particular to variational inequalities over partly smooth sets. As in classical nonlinear programming, such active-set structure underlies the design of accelerated local algorithms of Newton type. …
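A small numerical illustration of the finite-identification phenomenon (standard example, not from the paper): proximal gradient (ISTA) on a lasso problem identifies the zero pattern of the solution after finitely many iterations, after which the method effectively works on a smooth manifold.

```python
# Proximal gradient on  min_x 0.5*||Ax - b||^2 + lam*||x||_1 ;
# the printed support changes only finitely often before stabilizing.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
b = A @ np.array([1.5, -2.0] + [0.0] * 8) + 0.01 * rng.standard_normal(40)
lam = 1.0
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2

x = np.zeros(10)
prev_support = None
for k in range(500):
    grad = A.T @ (A @ x - b)
    v = x - step * grad
    x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # prox of lam*||.||_1
    support = tuple(np.flatnonzero(x))
    if support != prev_support:
        print(f"iter {k:3d}: support = {support}")
        prev_support = support
```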

An extragradient method for solving variational inequalities without monotonicity

A new extragradient projection method is devised in this paper; it does not require generalized monotonicity and assumes only that the so-called dual variational inequality has a solution in order to ensure global convergence. In particular, it applies to quasimonotone variational inequalities having a nontrivial solution.
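For reference, a minimal sketch of the classical Korpelevich extragradient scheme that the paper builds on (generic toy data; the standard analysis of this baseline needs monotonicity-type assumptions, and the paper's relaxation of them is not reproduced here):

```python
# Extragradient for VI(F, C): find x* in C with <F(x*), x - x*> >= 0 for
# all x in C, here with C = [0, 1]^n so the projection is a simple clip.
import numpy as np

def F(x):
    # A hypothetical affine monotone operator for illustration.
    M = np.array([[2.0, 1.0], [-1.0, 2.0]])
    q = np.array([-1.0, -1.0])
    return M @ x + q

def proj(x):
    return np.clip(x, 0.0, 1.0)

x = np.array([1.0, 1.0])
tau = 0.2                       # stepsize below 1/Lipschitz constant
for _ in range(200):
    y = proj(x - tau * F(x))    # predictor (first projection)
    x = proj(x - tau * F(y))    # corrector (extragradient step)
print("approximate VI solution:", x)
```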

An algorithmic characterization of P-matricity II: adjustments, refinements, and validation

The paper “An algorithmic characterization of P-matricity” (SIAM Journal on Matrix Analysis and Applications, 34:3 (2013) 904–916, by the same authors as here) implicitly assumes that the iterates generated by the Newton-min algorithm for solving a linear complementarity problem of dimension n, which reads 0 ⩽ x ⊥ (M x + q) ⩾ 0, are …
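To make the algorithm under discussion concrete, a minimal sketch of the Newton-min iteration on toy data (generic implementation; the paper's concern is when these iterates characterize P-matricity, which the sketch does not address): the LCP is viewed as min(x, Mx + q) = 0 componentwise, and each step solves the linear system obtained by fixing which side attains the min.

```python
import numpy as np

def newton_min(M, q, x, max_iter=50):
    """Newton-min for 0 <= x ⊥ (Mx + q) >= 0, i.e. min(x, Mx + q) = 0."""
    n = len(q)
    for _ in range(max_iter):
        w = M @ x + q
        A = x <= w                       # indices where the min is x_i
        # Newton system: x_i = 0 on A, (Mx + q)_i = 0 on the complement.
        J = np.where(A[:, None], np.eye(n), M)
        rhs = np.where(A, 0.0, -q)
        x_new = np.linalg.solve(J, rhs)
        if np.allclose(x_new, x):
            return x_new
        x = x_new
    return x

M = np.array([[3.0, 1.0], [1.0, 2.0]])   # a P-matrix: unique LCP solution
q = np.array([-4.0, -3.0])
x = newton_min(M, q, np.zeros(2))
print(x, M @ x + q)                      # nonnegative and complementary
```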

Projective Splitting with Forward Steps Only Requires Continuity

A recent innovation in projective splitting algorithms for monotone operator inclusions has been the development of a procedure using two forward steps instead of the customary proximal steps for operators that are Lipschitz continuous. This paper shows that the Lipschitz assumption is unnecessary when the forward steps are performed in finite-dimensional spaces: a backtracking linesearch …
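A generic illustration of why continuity alone can suffice for a backtracked forward step (the acceptance test in the paper's procedure is more specific than the contraction-type check used here; this is only a hedged sketch): halving the stepsize drives the trial point toward the current point, so by continuity the test eventually passes without any Lipschitz constant being known.

```python
import numpy as np

def forward_step_backtrack(B, z, rho0=1.0, theta=0.9, shrink=0.5, max_halvings=50):
    """Halve rho until rho*||B(x) - B(z)|| <= theta*||x - z|| holds for the
    forward step x = z - rho*B(z); continuity of B makes this achievable."""
    Bz = B(z)
    rho = rho0
    for _ in range(max_halvings):
        x = z - rho * Bz
        if rho * np.linalg.norm(B(x) - Bz) <= theta * np.linalg.norm(x - z):
            return x, rho
        rho *= shrink
    return z, 0.0   # z is (numerically) a zero of B

# Usage with a hypothetical operator that is continuous but not Lipschitz at 0.
B = lambda v: np.sign(v) * np.sqrt(np.abs(v))
x, rho = forward_step_backtrack(B, np.array([4.0, -1.0]))
print(x, rho)
```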

Deep Neural Network Structures Solving Variational Inequalities

We propose a novel theoretical framework to investigate deep neural networks using the formalism of proximal fixed point methods for solving variational inequalities. We first show that almost all activation functions used in neural networks are actually proximity operators. This leads to an algorithmic model alternating firmly nonexpansive and linear operators. We derive new results …
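A quick numerical check of the key observation, using standard facts (illustrative, not the paper's derivation): ReLU equals the proximity operator of the indicator of the nonnegative orthant (i.e., the projection onto it), and soft thresholding is the proximity operator of λ‖·‖₁.

```python
import numpy as np

x = np.linspace(-3, 3, 7)

relu = np.maximum(x, 0.0)
proj_nonneg = np.clip(x, 0.0, None)      # prox of the indicator of R_+^n
assert np.allclose(relu, proj_nonneg)    # ReLU is a proximity operator

lam = 1.0
soft = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)   # prox of lam*||.||_1
print(np.vstack([x, relu, soft]).T)
```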

Conditional Extragradient Algorithms for Solving Constrained Variational Inequalities

In this paper, we generalize the classical extragradient algorithm for solving variational inequality problems by utilizing non-null normal vectors of the feasible set. In particular, conceptual algorithms are proposed with two different linesearches. We then establish convergence results for these algorithms under mild assumptions. Our study suggests that non-null normal vectors may significantly improve convergence …
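A hedged sketch of the generalization, in generic notation that may differ from the paper's: where the classical extragradient method projects along −F(xᵏ) alone, the conditional variant augments the direction with a normal vector to the feasible set.

```latex
% Classical predictor step:
%   y^k = P_C\bigl(x^k - \alpha_k F(x^k)\bigr).
% Conditional variant with a non-null normal vector s^k \in N_C(x^k),
% the normal cone to the feasible set C at x^k:
\[
  y^k \;=\; P_C\bigl(x^k - \alpha_k\,(F(x^k) + s^k)\bigr),
  \qquad s^k \in N_C(x^k),
\]
% which recovers the classical method for the null choice s^k = 0.
```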