Why there is no need to use a big-M in linear bilevel optimization: A computational study of two ready-to-use approaches

Linear bilevel optimization problems have gained increasing attention, both in theory and in practical applications of Operations Research (OR), over the last years and decades. The latter is mainly due to the ability of this class of problems to model hierarchical decision processes. However, this ability also makes bilevel problems very hard to … Read more
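
For context, the big-M approach the title refers to replaces the lower-level problem of a linear bilevel program by its KKT conditions and linearizes the resulting complementarity constraints with a large constant $M$ and binary variables $u_i$; a minimal sketch (notation illustrative) is

\[
\lambda_i \le M u_i, \qquad (b - Cx - Dy)_i \le M (1 - u_i), \qquad u_i \in \{0,1\},
\]

where $\lambda$ denotes the lower-level dual variables and $b - Cx - Dy \ge 0$ the lower-level slack. Choosing $M$ too small cuts off optimal solutions, which is precisely the pitfall that motivates big-M-free approaches.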

Minimization of L1 over L2 for sparse signal recovery with convergence guarantee

The ratio of the $L_1$ and $L_2$ norms, denoted by $L_1/L_2$, is an attractive surrogate for the $L_0$ norm in promoting sparsity because of its scale invariance. In this paper, we incorporate the $L_1/L_2$ formalism into an unconstrained model in order to deal with both noiseless and noisy observations. To design an efficient algorithm, we derive … Read more
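
The scale invariance mentioned above is easy to verify numerically; a minimal sketch (illustrative values only):

import numpy as np

ratio = lambda v: np.linalg.norm(v, 1) / np.linalg.norm(v, 2)

x_sparse = np.array([3.0, 0.0, 0.0, 0.0])
x_dense = np.array([1.5, 1.5, 1.5, 1.5])

print(ratio(x_sparse), ratio(10 * x_sparse))  # 1.0 1.0: scaling changes nothing
print(ratio(x_dense))                         # 2.0, i.e. sqrt(n) for a flat vector

The ratio ranges from $1$ (a 1-sparse vector) to $\sqrt{n}$ (a flat vector), so minimizing it promotes sparsity independently of the signal magnitude, unlike the plain $L_1$ norm.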

Partial Lasserre relaxation for sparse Max-Cut

A common approach to solving, or bounding, polynomial optimization problems such as Max-Cut is to use the first level of the Lasserre hierarchy. Higher levels of the Lasserre hierarchy provide tighter bounds, but solving these relaxations is usually computationally intractable. We propose to strengthen the first level relaxation for sparse Max-Cut problems using constraints … Read more
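
For reference, the first level of the Lasserre hierarchy for Max-Cut coincides with the classical semidefinite relaxation

\[
\max_X \; \tfrac{1}{4} \langle L, X \rangle \quad \text{s.t.} \quad \operatorname{diag}(X) = e, \; X \succeq 0,
\]

where $L$ is the graph Laplacian and $e$ the all-ones vector; higher levels impose moment constraints on larger monomials, which is what drives up the computational cost.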

Spectral relaxations and branching strategies for global optimization of mixed-integer quadratic programs

We consider the global optimization of nonconvex quadratic programs and mixed-integer quadratic programs. We present a family of convex quadratic relaxations that are derived by convexifying nonconvex quadratic functions through perturbations of the quadratic matrix. We investigate the theoretical properties of these quadratic relaxations and show that they are equivalent to particular semidefinite programs. … Read more
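
One classical member of such a family is the uniform diagonal perturbation: writing $x^\top Q x = x^\top (Q - \lambda I) x + \lambda \|x\|^2$ with $\lambda \le \lambda_{\min}(Q)$ makes the first term convex. A minimal sketch (illustrative only, not the paper's specific construction):

import numpy as np

def diagonal_shift(Q):
    # smallest eigenvalue of the symmetric part of Q
    lam = np.linalg.eigvalsh((Q + Q.T) / 2).min()
    shift = min(lam, 0.0)  # only shift if Q is not already PSD
    Q_conv = Q - shift * np.eye(Q.shape[0])  # positive semidefinite by construction
    # the residual term shift * x^T x can be bounded via variable bounds
    # (or, for binary x, rewritten exactly using x_i^2 = x_i)
    return Q_conv, shift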

A General Framework for Optimal Data-Driven Optimization

We propose a statistically optimal approach to construct data-driven decisions for stochastic optimization problems. Fundamentally, a data-driven decision is simply a function that maps the available training data to a feasible action. It can always be expressed as the minimizer of a surrogate optimization model constructed from the data. The quality of a data-driven decision … Read more
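
A familiar instance of this viewpoint is the sample average approximation, where the surrogate model is the empirical mean of the loss; a minimal sketch (a baseline for illustration, not necessarily the statistically optimal surrogate the paper constructs):

import numpy as np

def saa_decision(data, loss, candidate_actions):
    # the data-driven decision is the minimizer of a surrogate model
    # built from the training data; here: the empirical risk
    risks = [np.mean([loss(a, xi) for xi in data]) for a in candidate_actions]
    return candidate_actions[int(np.argmin(risks))]

# e.g. a newsvendor: order quantity a, demand xi, under/over-stocking costs
demand_data = np.array([8.0, 12.0, 9.0, 15.0, 11.0])
loss = lambda a, xi: 2.0 * max(xi - a, 0.0) + 1.0 * max(a - xi, 0.0)
print(saa_decision(demand_data, loss, np.arange(0.0, 20.0)))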

On Distributionally Robust Multistage Convex Optimization: New Algorithms and Complexity Analysis

This paper presents a novel algorithmic study and complexity analysis of distributionally robust multistage convex optimization (DR-MCO). We propose a new class of algorithms for solving DR-MCO, namely a sequential dual dynamic programming (Seq-DDP) algorithm and its nonsequential version (NDDP). The new algorithms generalize and strengthen existing DDP-type algorithms by introducing the technique of regularization … Read more
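
DDP-type algorithms approximate the convex cost-to-go function of each stage by cutting planes that are refined iteratively; a toy scalar-state sketch of such a cut model (names and structure hypothetical, not the paper's Seq-DDP or NDDP):

class CutModel:
    def __init__(self):
        # outer (lower) approximation: list of (intercept, slope) cuts
        self.cuts = [(0.0, 0.0)]

    def value(self, x):
        # pointwise maximum of the supporting hyperplanes
        return max(a + b * x for a, b in self.cuts)

    def add_cut(self, intercept, slope):
        # a new supporting hyperplane found in the backward pass
        self.cuts.append((intercept, slope))

Forward passes simulate policies under the current models; backward passes refine them with new cuts.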

New algorithms for hierarchical optimisation in kidney exchange programmes

Kidney exchange programmes (KEPs) across the world match donors with recipients to identify feasible kidney transplants. Almost all KEPs use a hierarchical set of objectives to determine an optimal set of transplants to perform, and integer linear programming is often used to find such optimal matchings. In this work, we identify the barriers in existing … Read more
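
Hierarchical objectives are typically handled lexicographically: optimize the first objective, fix its optimal value as a constraint, then optimize the next. A minimal sketch with SciPy (the cycle-selection model and data layout are illustrative assumptions, not the formulations studied in the paper):

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def lexicographic_matching(A, c1, c2):
    # A: patients x cycles incidence matrix; x_j = 1 selects cycle j
    n = A.shape[1]
    kw = dict(integrality=np.ones(n), bounds=Bounds(0, 1))
    packing = LinearConstraint(A, -np.inf, 1)  # each patient in <= 1 cycle
    # stage 1: maximize transplants (milp minimizes, hence the sign flip)
    r1 = milp(-c1, constraints=[packing], **kw)
    best = round(-r1.fun)
    # stage 2: keep the transplant count optimal, optimize the next objective
    fix = LinearConstraint(c1[np.newaxis, :], best, best)
    r2 = milp(-c2, constraints=[packing, fix], **kw)
    return r2.x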

Matching Algorithms and Complexity Results for Constrained Mixed-Integer Optimal Control with Switching Costs

We extend recent work on the performance of the combinatorial integral approximation decomposition approach for Mixed-Integer Optimal Control Problems (MIOCPs) in the presence of combinatorial constraints or switching costs on an equidistant grid. For the time-discretized problem, we reformulate the rounding problem that emerges in the decomposition approach as a matching problem on a bipartite … Read more
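
For orientation, the classical baseline for this rounding step is sum-up rounding, which keeps the integrated control deviation below one grid cell; a minimal single-control sketch (the paper instead recasts rounding with switching costs as a bipartite matching problem, which this baseline ignores):

def sum_up_rounding(alpha, dt=1.0):
    # alpha: relaxed control values in [0, 1] on an equidistant grid
    beta, integ = [], 0.0
    for a in alpha:
        integ += a * dt  # accumulate the relaxed control
        b = 1 if integ >= 0.5 * dt else 0
        integ -= b * dt  # subtract what the rounded control "spends"
        beta.append(b)
    return beta

print(sum_up_rounding([0.3, 0.4, 0.2, 0.8]))  # [0, 1, 0, 1]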

A Noise-Tolerant Quasi-Newton Method for Unconstrained Optimization

This paper describes an extension of the BFGS and L-BFGS methods for minimizing a nonlinear function whose evaluations are subject to errors. This work is motivated by applications that contain computational noise, employ low-precision arithmetic, or are subject to statistical noise. The classical BFGS and L-BFGS methods can fail in such circumstances because the updating procedure … Read more
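
To see where noise bites, recall the inverse-Hessian BFGS update: it requires the curvature $y^\top s > 0$, which noisy gradients can destroy. A minimal sketch with a simple skip guard (one common safeguard shown for illustration; the paper's noise-tolerant mechanism may differ):

import numpy as np

def bfgs_update(H, s, y, eps=1e-8):
    # s: step, y: gradient difference; both may be corrupted by noise
    ys = float(y @ s)
    if ys <= eps * np.linalg.norm(s) * np.linalg.norm(y):
        return H  # curvature condition violated: keep the old approximation
    rho = 1.0 / ys
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)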

Convergence of Proximal Gradient Algorithm in the Presence of Adjoint Mismatch

We consider the proximal gradient algorithm for solving penalized least-squares minimization problems arising in data science. This first-order algorithm is attractive due to its flexibility and minimal memory requirements, which allow it to tackle large-scale minimization problems involving non-smooth penalties. However, for problems such as X-ray computed tomography, the applicability of the algorithm is dominated by the … Read more
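
Concretely, for an $\ell_1$-penalized least-squares problem the iteration is $x_{k+1} = \operatorname{prox}_{\gamma \lambda \|\cdot\|_1}(x_k - \gamma B(Ax_k - b))$, where ideally $B = A^\top$; an adjoint mismatch means $B$ only approximates $A^\top$. A minimal sketch (illustrative step size, no convergence safeguards):

import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad(A, B, b, lam, gamma, iters=200):
    # B plays the role of the adjoint A.T; in CT-like applications it is
    # often an inexact surrogate (the "adjoint mismatch" of the title)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - gamma * B @ (A @ x - b), gamma * lam)
    return x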