Min-Max Optimization Is Strictly Easier Than Variational Inequalities

Classically, a mainstream approach for solving a convex-concave min-max problem is to instead solve the variational inequality problem arising from its first-order optimality conditions. Is it possible to solve min-max problems faster by bypassing this reduction? This paper initiates this investigation. We show that the answer is yes in the textbook setting of unconstrained quadratic …
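For readers unfamiliar with the reduction in question, its standard form is textbook material (stated here for context, not as a result of the paper): a convex-concave problem $\min_x \max_y f(x,y)$ is mapped to the variational inequality of finding $z^* = (x^*, y^*)$ such that

\[
\langle F(z^*),\, z - z^* \rangle \ge 0 \quad \text{for all } z,
\qquad
F(x, y) = \begin{pmatrix} \nabla_x f(x, y) \\ -\nabla_y f(x, y) \end{pmatrix},
\]

so any solver for monotone variational inequalities applies to the min-max problem; the question raised here is whether dedicated min-max methods can beat this detour.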

Extended-Krylov-subspace methods for trust-region and norm-regularization subproblems

We present an effective new method for solving trust-region and norm-regularization problems that arise as subproblems in many optimization applications. We show that the solutions to such subproblems, viewed as functions of their controlling parameter (the trust-region radius or the regularization weight), lie on a manifold of very low approximate rank. Based on this, we build a …
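For context, the trust-region subproblem and its standard (Moré-Sorensen) optimality conditions make this parameter dependence concrete; this is classical background, not the paper's construction:

\[
\min_{\|x\| \le \Delta} \; \tfrac{1}{2} x^\top H x + g^\top x
\quad\Longleftrightarrow\quad
(H + \lambda I)\, x = -g, \;\; \lambda \ge 0, \;\; \lambda \,(\|x\| - \Delta) = 0, \;\; H + \lambda I \succeq 0.
\]

As the radius $\Delta$ (or, analogously, the regularization weight) varies, the solution traces the one-parameter curve $x(\lambda) = -(H + \lambda I)^{-1} g$, and it is the low approximate rank of this family that the extended-Krylov construction exploits.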

A One-Extra Player Reduction of GNEPs to NEPs

It is commonly held that generalized Nash equilibrium problems are harder than Nash equilibrium problems. In this work, we show that by adding a single new player, many generalized problems can be reduced to standard Nash equilibrium problems. The reduction holds for linear problems and for smooth convex problems satisfying a Slater-type condition. We also derive …
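To fix terminology (standard definitions, in our notation): in a Nash equilibrium problem (NEP), each player $i$ solves $\min_{x_i \in X_i} \theta_i(x_i, x_{-i})$ over a fixed strategy set $X_i$, whereas in a generalized Nash equilibrium problem (GNEP) the feasible set $X_i(x_{-i})$ also depends on the rivals' strategies, for instance through shared constraints $g(x_1, \dots, x_N) \le 0$. The idea here, as the title suggests, is that a single extra player can absorb this coupling.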

Generalizing the Successive Shortest Path Algorithm to Solve the Multi-Objective Assignment Problem

We introduce a novel characterization of the efficient solutions to the Multi-objective Assignment Problem (MAP), relying solely on Network Flow theory. This result establishes that the set of efficient assignments restricted to the first $k$ origin-destination pairs in the associated bipartite graph can be derived incrementally from the efficient solutions corresponding to the first $k-1$ …
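As background, here is a minimal sketch of the classical single-objective successive shortest path algorithm that the paper generalizes; the cost matrix, function names, and the Bellman-Ford-style relaxation are illustrative choices, not the paper's multi-objective method:

```python
# Sketch: classical successive shortest paths for the (single-objective)
# assignment problem. Each iteration augments along a shortest path from
# an unmatched origin to an unmatched destination in the residual graph.
import math

def assignment_ssp(C):
    """Min-cost perfect assignment on an n x n cost matrix C."""
    n = len(C)
    match_row = [-1] * n   # origin      -> assigned destination
    match_col = [-1] * n   # destination -> assigned origin
    for _ in range(n):
        # Bellman-Ford from a virtual source tied to all unmatched origins.
        dist_row = [0 if match_row[i] == -1 else math.inf for i in range(n)]
        dist_col = [math.inf] * n
        pred_col = [-1] * n
        for _ in range(2 * n):  # enough rounds for paths of <= 2n arcs
            updated = False
            for i in range(n):          # forward arcs origin -> destination
                for j in range(n):
                    if match_row[i] != j and dist_row[i] + C[i][j] < dist_col[j]:
                        dist_col[j] = dist_row[i] + C[i][j]
                        pred_col[j] = i
                        updated = True
            for j in range(n):          # backward arcs along matched edges
                i = match_col[j]
                if i != -1 and dist_col[j] - C[i][j] < dist_row[i]:
                    dist_row[i] = dist_col[j] - C[i][j]
            if not updated:
                break
        # Augment toward the cheapest reachable unmatched destination.
        j = min((j for j in range(n) if match_col[j] == -1),
                key=lambda j: dist_col[j])
        while j != -1:
            i = pred_col[j]
            nxt = match_row[i]          # destination i was matched to before
            match_row[i], match_col[j] = j, i
            j = nxt
    return match_row, sum(C[i][match_row[i]] for i in range(n))

print(assignment_ssp([[4, 1, 3], [2, 0, 5], [3, 2, 2]]))
```

The paper's characterization replaces the scalar shortest-path comparisons with a multi-objective notion of efficiency, building the efficient set pair by pair.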

Column Generation for Generalized Min-Cost Flows with Losses

The generalized flow problem deals with flows through a network with losses or gains along the arcs. Motivated by energy networks, this paper concentrates on the case with losses along cycles. Such networks can become extremely large, mostly because they are considered over long time horizons. We therefore develop a column generation approach for a …
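For context, the standard generalized-flow model (in our notation): each arc $a = (u, v)$ carries a multiplier $\gamma_a > 0$, so that $x_a$ units leaving $u$ arrive as $\gamma_a x_a$ units at $v$, with $\gamma_a < 1$ modeling a loss. Flow conservation at a node $v$ with balance $b_v$ then reads

\[
\sum_{a \in \delta^-(v)} \gamma_a x_a \;-\; \sum_{a \in \delta^+(v)} x_a \;=\; b_v,
\]

and a cycle whose multipliers have product below one loses flow on every traversal, which is the case targeted here.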

Minimal Regret Walras Equilibria for Combinatorial Markets via Duality, Integrality, and Sensitivity Gaps

We consider combinatorial multi-item markets and propose the notion of a ∆-regret Walras equilibrium, which is an allocation of items to players and a set of item prices that achieve the following goals: prices clear the market, the allocation is capacity-feasible, and the players’ strategies lead to a total regret of ∆. The regret is …
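One natural way to formalize the regret (our notation; the paper's precise definition may differ): given item prices $p$ and an allocation assigning bundle $x_i$ to player $i$, player $i$'s regret is the utility gap to a best response,

\[
r_i \;=\; \max_{S} \big( v_i(S) - p(S) \big) \;-\; \big( v_i(x_i) - p(x_i) \big),
\qquad
\sum_i r_i \le \Delta,
\]

where $v_i$ is player $i$'s valuation and $p(S)$ the total price of bundle $S$; taking $\Delta = 0$ recovers an exact Walras equilibrium.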

Solving bilevel optimization via sequential minimax optimization

In this paper we propose a sequential minimax optimization (SMO) method for solving a class of constrained bilevel optimization problems in which the lower-level part is a possibly nonsmooth convex optimization problem, while the upper-level part is a possibly nonconvex optimization problem. Specifically, SMO applies a first-order method to solve a sequence of minimax subproblems, …
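To see why minimax subproblems arise, consider one standard route (a value-function sketch under simplifying assumptions, not necessarily the paper's exact construction). For the bilevel problem

\[
\min_{x, y} \; F(x, y) \quad \text{s.t.} \quad y \in \operatorname*{arg\,min}_{y'} \; g(x, y'),
\]

the lower-level optimality constraint $g(x, y) \le \min_{z} g(x, z)$ can be penalized with weight $\rho > 0$, yielding

\[
\min_{x, y} \, \max_{z} \; F(x, y) + \rho \big( g(x, y) - g(x, z) \big),
\]

since the inner maximization over $z$ recovers the lower-level value function; increasing $\rho$ along the iterations produces a sequence of minimax subproblems.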

A Combinatorial Branch-and-Bound Algorithm for the Capacitated Facility Location Problem under Strict Customer Preferences

This work proposes a combinatorial branch-and-bound (B&B) algorithm for the capacitated facility location problem under strict customer preferences (CFLP-SCP). We use combinatorial insights into the problem structure for preprocessing, to model branching implications, to enforce feasibility or prove infeasibility, to select branching variables, and to derive primal and dual bounds at each node of the B&B …
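A generic best-first B&B loop makes the roles of these components concrete; every hook below (dual_bound, try_primal, is_infeasible, branch) is a hypothetical placeholder, not the paper's code:

```python
# Sketch: a generic best-first branch-and-bound driver. The hooks mark
# where preprocessing, infeasibility proofs, primal heuristics, variable
# selection, and bound computations would plug in for a problem like
# CFLP-SCP.
import heapq

def branch_and_bound(root, dual_bound, try_primal, is_infeasible, branch):
    best_val, best_sol = float("inf"), None
    heap = [(dual_bound(root), 0, root)]    # best-first on the dual bound
    tie = 1                                 # tiebreaker so nodes never compare
    while heap:
        bound, _, node = heapq.heappop(heap)
        if bound >= best_val:               # node cannot improve the incumbent
            continue
        if is_infeasible(node):             # combinatorial infeasibility proof
            continue
        primal = try_primal(node)           # heuristic completion: (val, sol)
        if primal is not None and primal[0] < best_val:
            best_val, best_sol = primal
        for child in branch(node):          # variable selection + implications
            b = dual_bound(child)
            if b < best_val:                # prune dominated children early
                heapq.heappush(heap, (b, tie, child))
                tie += 1
    return best_val, best_sol
```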

Inverse Optimization with Discrete Decisions

Inverse optimization (IO) has emerged as a powerful framework for analyzing prescriptive model parameters that rationalize observed or prescribed decisions. Despite the prevalence of discrete decision-making models, existing work has primarily focused on continuous and convex models, for which the corresponding IO problems are far easier to solve. This paper makes three contributions that broaden …
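For context, the classical IO formulation (standard, in our notation): given a forward problem $\min_{x \in X} c^\top x$ and an observed decision $\hat{x} \in X$, one seeks a cost vector close to a prior estimate $\hat{c}$ under which $\hat{x}$ is optimal:

\[
\min_{c} \; \lVert c - \hat{c} \rVert \quad \text{s.t.} \quad c^\top \hat{x} \le c^\top x \;\; \text{for all } x \in X.
\]

When $X$ is convex, this optimality constraint can be reformulated compactly via duality; when $X$ is discrete, it ranges over exponentially many feasible points, which is the source of the difficulty the abstract highlights.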

Preconditioned subgradient method for composite optimization: overparameterization and fast convergence

Composite optimization problems involve minimizing the composition of a smooth map with a convex function. Such objectives arise in numerous data science and signal processing applications, including phase retrieval, blind deconvolution, and collaborative filtering. The subgradient method achieves local linear convergence when the composite loss is well-conditioned. However, if the smooth map is, in a …
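As a concrete instance of the setting, here is a minimal sketch of the plain (unpreconditioned) subgradient baseline on a robust phase-retrieval loss; the problem data, the $\ell_1$ composite instance, and the Polyak step size are all illustrative choices, and the paper's preconditioned method is not reproduced here:

```python
# Sketch: subgradient method for the composite loss h(c(x)) with
#   f(x) = (1/m) * sum_i | (a_i^T x)^2 - b_i |,
# i.e. h = mean absolute value and c the smooth measurement map.
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 400
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = (A @ x_true) ** 2                      # noiseless phase measurements

def loss_and_subgrad(x):
    r = (A @ x) ** 2 - b                   # residuals c(x) - b
    f = np.abs(r).mean()
    # chain rule: subgradient = (2/m) * A^T (sign(r) * (A x))
    g = (2.0 / m) * A.T @ (np.sign(r) * (A @ x))
    return f, g

x = x_true + 0.3 * rng.standard_normal(n)  # warm start near the signal
for _ in range(500):
    f, g = loss_and_subgrad(x)
    if f == 0 or not g.any():
        break
    x -= (f / (g @ g)) * g                 # Polyak step (optimal value is 0)

print("final loss:", loss_and_subgrad(x)[0])
```

With a well-conditioned loss this baseline converges linearly from the warm start; the ill-conditioned regime motivating the paper's preconditioning is exactly where such plain steps slow down.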