Solution Path of Time-varying Markov Random Fields with Discrete Regularization

We study the problem of inferring sparse time-varying Markov random fields (MRFs) with different discrete and temporal regularizations on the parameters. Due to the intractability of discrete regularization, most approaches for solving this problem rely on the so-called maximum-likelihood estimation (MLE) with relaxed regularization, which neither results in ideal statistical properties nor scales to … Read more
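
As a rough illustration only (the exact formulation is not given above, so the loss, the data windows, and the penalty weights below are assumptions), a discretely regularized time-varying MRF estimate can be written as

\[
\min_{\Theta_1,\dots,\Theta_T} \;\; \sum_{t=1}^{T} \Big( -\ell(\Theta_t;\mathcal{D}_t) + \lambda\,\|\Theta_t\|_0 \Big) \;+\; \gamma \sum_{t=2}^{T} \|\Theta_t - \Theta_{t-1}\|_0,
\]

where \(\ell(\Theta_t;\mathcal{D}_t)\) is the log-likelihood of the data observed at time \(t\) and \(\|\cdot\|_0\) counts nonzero entries; the relaxed MLE approaches mentioned above replace the \(\ell_0\) penalties with convex surrogates such as the \(\ell_1\) norm.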

A Multicut Approach to Compute Upper Bounds for Risk-Averse SDDP

Stochastic Dual Dynamic Programming (SDDP) is a widely used and fundamental algorithm for solving multistage stochastic optimization problems. Although SDDP has been frequently applied to solve risk-averse models with the Conditional Value-at-Risk (CVaR), it is known that the estimation of upper bounds is a methodological challenge, and many methods are computationally intensive. In practice, this … Read more
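
For reference, the Conditional Value-at-Risk at level \(\alpha\) of a random loss \(Z\) admits the Rockafellar–Uryasev representation

\[
\mathrm{CVaR}_\alpha(Z) \;=\; \min_{u\in\mathbb{R}} \left\{ u + \frac{1}{1-\alpha}\,\mathbb{E}\big[(Z-u)_+\big] \right\},
\]

which is the risk measure typically nested stage by stage in risk-averse SDDP models; the upper-bound question above concerns how this nested quantity can be estimated efficiently.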

A Tutorial on Solving Single-Leader-Multi-Follower Problems using SOS1 Reformulations

In this tutorial we consider single-leader-multi-follower games in which the models of the lower-level players have polyhedral feasible sets and convex objective functions. This situation allows for classic KKT reformulations of the separate lower-level problems, which lead to challenging single-level reformulations of MPCC type. The main contribution of this tutorial is to present a ready-to-use … Read more
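
As a minimal sketch of the underlying modeling trick (the variable names and the single complementarity pair below are illustrative, not taken from the tutorial), a KKT complementarity condition \(0 \le \lambda \perp s \ge 0\) from one follower can be encoded with an SOS1 constraint, e.g. in gurobipy:

    import gurobipy as gp
    from gurobipy import GRB

    m = gp.Model("sos1_complementarity")
    lam = m.addVar(lb=0.0, name="lambda")  # multiplier of a follower constraint
    s = m.addVar(lb=0.0, name="slack")     # slack of that follower constraint
    # ... leader variables, follower stationarity, and primal feasibility go here ...
    # SOS1: at most one of {lambda, slack} may be nonzero, i.e. lambda * slack = 0
    m.addSOS(GRB.SOS_TYPE1, [lam, s], [1.0, 2.0])
    m.optimize()

Branching on the SOS1 condition is then handled by the MIP solver, avoiding big-M constants for the complementarity constraints.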

Duality of upper bounds in stochastic dynamic programming

For multistage stochastic programming problems with stagewise independent uncertainty, dynamic programming algorithms calculate polyhedral approximations for the value functions at each stage. The SDDP algorithm provides piecewise linear lower bounds, in the spirit of the L-shaped algorithm, while corresponding upper bounds took longer to appear. One strategy uses the primal dynamic programming recursion … Read more
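
For context, the stagewise-independent dynamic programming recursion and its cut-based outer approximation read, in generic notation (not specific to this paper),

\[
Q_t(x_{t-1}) \;=\; \mathbb{E}_{\xi_t}\!\left[\, \min_{x_t \ge 0} \; c_t^\top x_t + Q_{t+1}(x_t) \;\;\text{s.t.}\;\; W_t x_t = h_t - T_t x_{t-1} \right],
\]

where SDDP maintains piecewise linear lower approximations \(\underline{Q}_t(x) = \max_k \{\alpha_k + \beta_k^\top x\} \le Q_t(x)\) built from cuts; upper-bounding strategies instead approximate each value function from above, for instance through inner (primal) approximations of \(Q_t\).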

Modified Monotone Policy Iteration for Interpretable Policies in Markov Decision Processes and the Impact of State Ordering Rules

Optimizing interpretable policies for Markov Decision Processes (MDPs) can be computationally intractable for large-scale MDPs; for example, for monotone policies, the optimal interpretable policy depends on the initial state distribution, precluding standard dynamic programming techniques. Previous work has proposed Monotone Policy Iteration (MPI) to produce a feasible solution for warm starting a Mixed Integer Linear Program … Read more
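
As a rough, generic illustration (this is not the MPI algorithm referenced above; the MDP encoding, the state ordering, and the projection rule are all assumptions), one simple way to keep a policy monotone during policy improvement is to project the greedy policy onto the set of nondecreasing policies:

    import numpy as np

    def monotone_policy_iteration(P, R, gamma=0.95, iters=100):
        """P: (A, S, S) transition tensor, R: (S, A) rewards.
        Returns a policy that is nondecreasing in the state index."""
        S, A = R.shape
        pi = np.zeros(S, dtype=int)
        for _ in range(iters):
            # Policy evaluation: solve (I - gamma * P_pi) V = R_pi
            P_pi = P[pi, np.arange(S), :]
            R_pi = R[np.arange(S), pi]
            V = np.linalg.solve(np.eye(S) - gamma * P_pi, R_pi)
            # Greedy improvement
            Q = R + gamma * np.einsum("asj,j->sa", P, V)
            greedy = Q.argmax(axis=1)
            # Crude monotone projection: running maximum over the state ordering
            new_pi = np.maximum.accumulate(greedy)
            if np.array_equal(new_pi, pi):
                break
            pi = new_pi
        return pi

The choice of state ordering clearly matters in such a projection, which is exactly the kind of question the state ordering rules mentioned above address.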

A Short Proof of Tight Bounds on the Smallest Support Size of Integer Solutions to Linear Equations

In this note, we study the size of the support of solutions to linear Diophantine equations $Ax=b, ~x\in\Z^n$, where $A\in\Z^{m\times n}$ and $b\in\Z^m$. We give an asymptotically tight upper bound on the smallest support size, parameterized by $\|A\|_\infty$ and $m$, and taken as a worst case over all $b$ such that the above system has … Read more

Global convergence of a BFGS-type algorithm for nonconvex multiobjective optimization problems

We propose a modified BFGS algorithm for multiobjective optimization problems that is globally convergent even in the absence of convexity assumptions on the objective functions. Furthermore, we establish the superlinear convergence of the method under usual conditions. Our approach employs Wolfe step sizes and ensures that the Hessian approximations are updated and corrected at each iteration … Read more
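
For reference, the classical single-objective BFGS building blocks that such methods adapt are the Wolfe line search and the secant update

\[
B_{k+1} \;=\; B_k \;-\; \frac{B_k s_k s_k^\top B_k}{s_k^\top B_k s_k} \;+\; \frac{y_k y_k^\top}{y_k^\top s_k},
\qquad s_k = x_{k+1}-x_k,\quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k),
\]

which preserves positive definiteness whenever the curvature condition \(y_k^\top s_k > 0\) holds, a condition guaranteed by Wolfe step sizes in the single-objective case; in the multiobjective setting the update must be adapted (and, as indicated above, corrected), since a single gradient is replaced by the gradients of all objectives.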

Diagonal Partitioning Strategy Using Bisection of Rectangles and a Novel Sampling Scheme

In this paper we consider a global optimization problem in which the objective function is assumed to be Lipschitz continuous with an unknown Lipschitz constant. Based on the recently introduced BIRECT (BIsection of RECTangles) algorithm, a new diagonal partitioning and sampling scheme is introduced. Our framework, called BIRECT-V (where V stands for vertices), combines bisection with sampling … Read more
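
As a very loose sketch of the basic operation involved (not the BIRECT-V rule itself; how the bisection direction and sample points are chosen is purely illustrative):

    def bisect_longest_side(lower, upper):
        """Bisect the box [lower, upper] along its longest side; return the two halves."""
        i = max(range(len(lower)), key=lambda j: upper[j] - lower[j])
        mid = 0.5 * (lower[i] + upper[i])
        left_upper = list(upper); left_upper[i] = mid
        right_lower = list(lower); right_lower[i] = mid
        return (list(lower), left_upper), (right_lower, list(upper))

Diagonal schemes such as BIRECT then decide where inside each sub-rectangle the objective is evaluated; as the name suggests, BIRECT-V differs in how it places these sample points relative to the rectangle vertices.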

A novel UCB-based batch strategy for Bayesian optimization

The optimization of expensive black-box functions appears in many situations. Bayesian optimization methods have been successfully applied to solve these problems using well-known single-point acquisition functions. Nowadays, developments in technology allow us to perform evaluations of some of these expensive functions in parallel. Therefore, there is a need for batch infill criteria … Read more
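
For context, a minimal sketch of the standard GP-UCB acquisition together with a naive top-\(q\) batch rule (this is a baseline illustration, not the batch strategy proposed above; the function and parameter names are assumptions):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def ucb_batch(X_obs, y_obs, candidates, kappa=2.0, batch_size=4):
        """Fit a GP to the observations and return the batch_size candidate
        points (rows of a NumPy array) with the largest UCB value mu + kappa*sigma."""
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X_obs, y_obs)
        mu, sigma = gp.predict(candidates, return_std=True)
        ucb = mu + kappa * sigma
        best = np.argsort(ucb)[::-1][:batch_size]
        return candidates[best]

Naively taking the q highest-UCB points tends to cluster evaluations; dedicated batch criteria are designed to spread the batch while still exploiting the surrogate model.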

Playing Stackelberg security games in perfect formulations

Protecting critical infrastructure from intentional damage requires foreseeing the strategies of possible attackers. The problem faced by the defender of such infrastructure can be formulated as a Stackelberg security game. A defender must decide which specific targets to protect with limited resources, maximizing their expected utility (e.g., minimizing damage value) and considering that a second … Read more
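
In the standard security-game payoff structure (generic notation, not necessarily the perfect formulations studied here), the defender and attacker utilities for an attack on target \(t\) under coverage vector \(c\) are

\[
U_d(t,c) \;=\; c_t\,U_d^{\mathrm{cov}}(t) + (1-c_t)\,U_d^{\mathrm{unc}}(t),
\qquad
U_a(t,c) \;=\; c_t\,U_a^{\mathrm{cov}}(t) + (1-c_t)\,U_a^{\mathrm{unc}}(t),
\]

where \(c_t \in [0,1]\) is the probability that target \(t\) is covered, subject to resource constraints such as \(\sum_t c_t \le m\); the defender maximizes \(U_d\) anticipating that the attacker best-responds by attacking a target maximizing \(U_a\).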