A Double-oracle, Logic-based Benders decomposition approach to solve the K-adaptability problem

We propose a novel approach to solve K-adaptability problems with a convex objective, convex constraints, and integer first-stage decisions. A logic-based Benders decomposition is applied to handle the first-stage decisions in a master problem, so that the sub-problem becomes a min-max-min robust combinatorial optimization problem, which is solved via a double-oracle algorithm that iteratively generates adverse scenarios … Read more
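For orientation, the K-adaptability problem referred to in the abstract is usually written as a min-max-min problem over K second-stage policies; the notation below is a generic illustration and not taken from the paper.

```latex
% K-adaptability with integer first-stage decision x and K second-stage policies
% y^1, ..., y^K chosen before the uncertain scenario \xi \in \Xi is revealed:
\min_{x \in X} \;\; \min_{y^1,\dots,y^K \in Y(x)} \;\; \max_{\xi \in \Xi} \;\; \min_{k \in \{1,\dots,K\}} \; f\bigl(x, y^k, \xi\bigr)
```

Once the master problem fixes the integer first-stage decision x, the remaining min-max-min problem over the K policies and the scenario set is the subproblem that the abstract's double-oracle algorithm addresses.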

COIL: A Deep Architecture for Column Generation

Column generation is a popular method for solving large-scale linear programs with an exponential number of variables. Several important applications, such as the vehicle routing problem, rely on this technique to be solved. However, in practice, column generation methods suffer from slow convergence (i.e., they require too many iterations). Stabilization techniques, which carefully … Read more
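As background, here is a minimal sketch of the classical column generation loop that such methods build on, applied to a textbook cutting-stock instance; the data and the use of scipy's LP solver are illustrative assumptions, and the learned COIL component is not reproduced here.

```python
# Minimal column generation for cutting stock: restricted master LP + knapsack pricing.
import numpy as np
from scipy.optimize import linprog

W = 100                              # roll width
w = np.array([45, 36, 31, 14])       # item widths
d = np.array([97, 610, 395, 211])    # item demands
n = len(w)

# Start from trivial patterns: as many copies of a single item as fit on one roll.
patterns = [np.eye(n, dtype=int)[i] * (W // w[i]) for i in range(n)]

for _ in range(100):
    A = np.array(patterns).T                      # rows: items, columns: patterns
    m = A.shape[1]
    # Restricted master LP: min sum(x) s.t. A x >= d, x >= 0.
    res = linprog(c=np.ones(m), A_ub=-A, b_ub=-d,
                  bounds=[(0, None)] * m, method="highs")
    duals = -res.ineqlin.marginals                # duals of the >= demand constraints

    # Pricing: unbounded knapsack, maximize sum(duals[i]*a[i]) s.t. sum(w[i]*a[i]) <= W.
    best = np.zeros(W + 1)
    take = np.full(W + 1, -1)
    for c in range(1, W + 1):
        best[c] = best[c - 1]
        for i in range(n):
            if w[i] <= c and best[c - w[i]] + duals[i] > best[c]:
                best[c], take[c] = best[c - w[i]] + duals[i], i

    if best[W] <= 1 + 1e-9:                       # reduced cost 1 - best[W] >= 0
        break                                     # no improving column exists

    new_pat, c = np.zeros(n, dtype=int), W        # recover the improving pattern
    while c > 0:
        if take[c] == -1:
            c -= 1
        else:
            new_pat[take[c]] += 1
            c -= w[take[c]]
    patterns.append(new_pat)

print(f"LP bound: {res.fun:.2f} rolls using {len(patterns)} columns")
```

Each iteration solves the restricted master over the current columns and then a pricing problem to find a column with negative reduced cost; stalling of this loop is the slow convergence the abstract mentions.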

Generalized polarity and weakest constraint qualifications in multiobjective optimization

In G. Haeser, A. Ramos, Constraint Qualifications for Karush-Kuhn-Tucker Conditions in Multiobjective Optimization, JOTA, Vol. 187 (2020), 469-487, a generalization of the normal cone from single-objective to multiobjective optimization is introduced, along with a weakest constraint qualification such that any local weak Pareto optimal point is a weak Kuhn-Tucker point. We extend this approach to … Read more
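For reference, the weak Kuhn-Tucker condition mentioned in the abstract can be stated as follows for a smooth problem with objectives f_1, ..., f_m and inequality constraints g_1, ..., g_p; the notation is assumed here, and the paper's generalized normal cone is not reproduced.

```latex
% A feasible point \bar{x} is a weak Kuhn-Tucker point if there exist multipliers
% \theta \in \mathbb{R}^m_{\ge 0}, not all zero, and \lambda \in \mathbb{R}^p_{\ge 0} such that
\sum_{j=1}^{m} \theta_j \nabla f_j(\bar{x}) \;+\; \sum_{i=1}^{p} \lambda_i \nabla g_i(\bar{x}) \;=\; 0,
\qquad \lambda_i \, g_i(\bar{x}) = 0, \quad i = 1,\dots,p.
```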

A note on quadratic constraints with indicator variables: Convex hull description and perspective relaxation

In this paper, we study the mixed-integer nonlinear set given by a separable quadratic constraint on continuous variables, where each continuous variable is controlled by an additional indicator. This set occurs pervasively in optimization problems with uncertainty and in machine learning. We show that optimization over this set is NP-hard. Despite this negative result, we … Read more
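To make the title concrete, the following is the usual way such a set and its perspective relaxation are written for a single separable quadratic constraint; the notation is assumed for illustration and may differ from the paper's.

```latex
% Separable quadratic constraint whose continuous variables are switched by indicators:
S = \Bigl\{ (x, z, t) \;:\; \sum_{i=1}^{n} x_i^2 \le t, \;\; x_i (1 - z_i) = 0, \;\; z \in \{0,1\}^n \Bigr\}.
% Perspective relaxation: replace x_i^2 by its perspective x_i^2 / z_i, written with
% auxiliary variables s_i as rotated second-order cone constraints:
\sum_{i=1}^{n} s_i \le t, \qquad x_i^2 \le s_i z_i, \qquad 0 \le z_i \le 1, \quad i = 1, \dots, n.
```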

Convergence rate analysis of the gradient descent-ascent method for convex-concave saddle-point problems

In this paper, we study the gradient descent-ascent method for convex-concave saddle-point problems. We derive a new non-asymptotic global convergence rate in terms of distance to the solution set by using the semidefinite programming performance estimation method. The resulting convergence rate incorporates most parameters of the problem and is exact for a large class … Read more
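The method analyzed here is the plain simultaneous gradient descent-ascent iteration; the sketch below runs it on a toy strongly convex-strongly concave function (the test function and step size are illustrative assumptions, and the performance-estimation analysis itself is not reproduced).

```python
# Simultaneous gradient descent-ascent (GDA) on f(x, y) = 0.5*||x||^2 + x@y - 0.5*||y||^2,
# whose unique saddle point is (0, 0).
import numpy as np

def grad_x(x, y):          # partial gradient with respect to x
    return x + y

def grad_y(x, y):          # partial gradient with respect to y
    return x - y

rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)
alpha = 0.1                # step size, small enough for convergence on this example

for _ in range(2000):
    # descent step in x, ascent step in y, computed from the same iterate
    x, y = x - alpha * grad_x(x, y), y + alpha * grad_y(x, y)

print("distance to the saddle point (0, 0):", np.linalg.norm(np.concatenate([x, y])))
```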

A Simplified Convergence Theory for Byzantine Resilient Stochastic Gradient Descent

In distributed learning, a central server trains a model according to updates provided by nodes holding local data samples. In the presence of one or more malicious nodes sending incorrect information (a Byzantine adversary), standard algorithms for model training such as stochastic gradient descent (SGD) fail to converge. In this paper, we present a simplified … Read more
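To make the failure mode and the usual remedy concrete, the sketch below shows one server aggregation step that replaces the gradient average with a coordinate-wise median, a standard robust aggregator; it is an illustration only and not necessarily the rule analyzed in the paper.

```python
# One Byzantine-resilient server step: aggregate worker gradients with a
# coordinate-wise median instead of the mean, then apply an SGD update.
import numpy as np

def server_step(w, worker_grads, lr=0.1):
    """Robustly aggregate worker gradients and take one SGD step."""
    G = np.stack(worker_grads)              # shape: (num_workers, dim)
    robust_grad = np.median(G, axis=0)      # coordinate-wise median
    return w - lr * robust_grad

# Toy round: 8 honest workers report the true gradient plus noise,
# 2 Byzantine workers report arbitrary extreme values.
rng = np.random.default_rng(1)
w = np.zeros(4)
true_grad = np.array([1.0, -2.0, 0.5, 3.0])
grads = [true_grad + 0.01 * rng.standard_normal(4) for _ in range(8)]
grads += [np.full(4, 1e6), np.full(4, -1e6)]     # Byzantine updates
print(server_step(w, grads))                     # close to -0.1 * true_grad
```

With a plain average, the two extreme updates would dominate the step; the median discards them, which is the kind of resilience the abstract refers to.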

Relaxations and Duality for Multiobjective Integer Programming

Multiobjective integer programs (MOIPs) simultaneously optimize multiple objective functions over a set of linear constraints and integer variables. In this paper, we present continuous, convex hull and Lagrangian relaxations for MOIPs and examine the relationship among them. The convex hull relaxation is tight at supported solutions, i.e., those that can be derived via a … Read more
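For reference, two of the relaxations named in the abstract can be written as follows for a generic MOIP; the notation is an assumption for illustration, and the Lagrangian relaxation, which requires a choice of multipliers, is omitted here.

```latex
% Generic MOIP with p objectives, minimization understood in the Pareto sense:
\min \;\bigl\{\, Cx \;:\; x \in X \,\bigr\}, \qquad
X = \bigl\{\, x \in \mathbb{Z}^n_{\ge 0} \;:\; Ax \ge b \,\bigr\}, \qquad C \in \mathbb{R}^{p \times n}.
% Continuous relaxation (drop integrality) and convex hull relaxation:
\min \;\bigl\{\, Cx \;:\; Ax \ge b,\; x \ge 0 \,\bigr\},
\qquad\qquad
\min \;\bigl\{\, Cx \;:\; x \in \operatorname{conv}(X) \,\bigr\}.
```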

Using Taylor-Approximated Gradients to Improve the Frank-Wolfe Method for Empirical Risk Minimization

The Frank-Wolfe method has become increasingly useful in statistical and machine learning applications, due to the structure-inducing properties of the iterates, and especially in settings where linear minimization over the feasible set is more computationally efficient than projection. In the setting of Empirical Risk Minimization — one of the fundamental optimization problems in statistical and … Read more
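As context, the sketch below is the standard Frank-Wolfe method on an l1-ball-constrained least-squares ERM problem, where the linear minimization oracle reduces to selecting a single coordinate; the data are synthetic, and the paper's Taylor-approximated gradient variant is not reproduced, so this is only the baseline it improves upon.

```python
# Vanilla Frank-Wolfe for least squares over an l1 ball of a given radius.
import numpy as np

rng = np.random.default_rng(0)
n_samples, dim, radius = 200, 50, 5.0
A = rng.standard_normal((n_samples, dim))
b = A @ (radius * rng.dirichlet(np.ones(dim))) + 0.01 * rng.standard_normal(n_samples)

x = np.zeros(dim)                                  # feasible starting point
for k in range(500):
    grad = A.T @ (A @ x - b) / n_samples           # gradient of 0.5/n * ||Ax - b||^2
    i = np.argmax(np.abs(grad))                    # LMO over the l1 ball:
    s = np.zeros(dim)                              # a signed vertex of the ball
    s[i] = -radius * np.sign(grad[i])
    gamma = 2.0 / (k + 2)                          # standard open-loop step size
    x = (1 - gamma) * x + gamma * s                # convex combination keeps feasibility

print("final objective:", 0.5 / n_samples * np.linalg.norm(A @ x - b) ** 2)
```

The full-gradient computation in each iteration is the expensive part in the ERM setting, which is what motivates approximating it.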

On the Relation Between Affinely Adjustable Robust Linear Complementarity and Mixed-Integer Linear Feasibility Problems

We consider adjustable robust linear complementarity problems and extend the results of Biefel et al. (2022) to convex and compact uncertainty sets. Moreover, for the case of polyhedral uncertainty sets, we prove that computing an adjustable robust solution of a given linear complementarity problem is equivalent to solving a properly chosen mixed-integer linear feasibility problem. … Read more
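To illustrate why complementarity naturally leads to mixed-integer linear feasibility problems, the block below states the nominal LCP and the standard big-M modeling of its complementarity condition; this is generic background with assumed notation, not the paper's adjustable robust reformulation.

```latex
% Nominal LCP(q, M): find z such that
z \ge 0, \qquad Mz + q \ge 0, \qquad z^{\top}(Mz + q) = 0.
% Standard big-M modeling of the complementarity condition with binaries u and a
% sufficiently large constant B, which turns the problem into mixed-integer
% linear feasibility:
0 \le z \le B\,u, \qquad 0 \le Mz + q \le B\,(1 - u), \qquad u \in \{0,1\}^n.
```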