K-adaptability for two-stage stochastic optimization
In this paper, we propose a subgradient algorithm with a non-asymptotic convergence guarantee to solve copositive programming problems. The subproblem to be solved at each iteration is a standard quadratic programming problem, which is NP-hard in general. However, the proposed algorithm allows this subproblem to be solved inexactly. For a prescribed accuracy $\epsilon > 0$ … Read more
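The excerpt above describes a subgradient scheme whose inner standard quadratic program is solved only approximately. As a rough, self-contained illustration of that idea (not the paper's algorithm), the sketch below approximates the copositivity measure $\mu(Q) = \min_{x \in \Delta} x^\top Q x$ with a few projected-gradient steps over the simplex and reads off an approximate supergradient of $\mu$ with respect to $Q$; all function names and data are made up for the example.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the standard simplex {x >= 0, sum(x) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def inexact_standard_qp(Q, iters=200, step=0.05):
    """Roughly minimize x^T Q x over the simplex by projected gradient.

    Stands in for the NP-hard standard QP subproblem, solved only inexactly.
    """
    x = np.full(Q.shape[0], 1.0 / Q.shape[0])
    for _ in range(iters):
        x = project_simplex(x - step * (Q + Q.T) @ x)
    return x

# mu(Q) = min_{x in simplex} x^T Q x is concave in Q (a min of linear functions),
# so an approximate minimizer x_hat gives the approximate supergradient x_hat x_hat^T.
rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 5))
Q = 0.5 * (Q + Q.T)
x_hat = inexact_standard_qp(Q)
mu_approx = x_hat @ Q @ x_hat      # a nonnegative value would suggest Q is (approximately) copositive
supergrad = np.outer(x_hat, x_hat)
print(mu_approx, supergrad.trace())
```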
For mixed-integer programs (MIPs), strong branching is a highly effective variable selection method to reduce the number of nodes in the branch-and-bound algorithm. Extending it to nonlinear problems is conceptually simple but practically limited. Branching on a binary variable fixes the variable to 0 or 1, whereas branching on a continuous variable requires an additional … Read more
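For readers unfamiliar with strong branching, the hedged sketch below scores a fractional binary variable by re-solving the LP relaxation with that variable fixed to 0 and to 1 and combining the two bound changes. It uses SciPy's `linprog` on toy data and only illustrates the binary case mentioned in the excerpt, not the paper's nonlinear extension.

```python
import numpy as np
from scipy.optimize import linprog

def lp_bound(c, A_ub, b_ub, bounds):
    """Optimal value of the LP relaxation (+inf if the child is infeasible)."""
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun if res.success else np.inf

def strong_branching_score(c, A_ub, b_ub, bounds, j, parent_obj):
    """Product of the bound improvements from fixing binary variable j to 0 and to 1."""
    b0 = list(bounds); b0[j] = (0, 0)
    b1 = list(bounds); b1[j] = (1, 1)
    d0 = lp_bound(c, A_ub, b_ub, b0) - parent_obj
    d1 = lp_bound(c, A_ub, b_ub, b1) - parent_obj
    return max(d0, 1e-6) * max(d1, 1e-6)

# Toy instance (hypothetical data): minimize -x0 - 2*x1 subject to x0 + x1 <= 1.5, x binary.
c = np.array([-1.0, -2.0])
A_ub, b_ub = np.array([[1.0, 1.0]]), np.array([1.5])
bounds = [(0, 1), (0, 1)]
parent = lp_bound(c, A_ub, b_ub, bounds)
scores = {j: strong_branching_score(c, A_ub, b_ub, bounds, j, parent) for j in range(2)}
print(parent, scores)  # branch on the variable with the largest score
```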
In this paper, we study the set \(\mathcal{S}^\kappa = \{ (x,y)\in\mathcal{G}\times\mathbb{R}^n : y_j = x_j^\kappa , j=1,\dots,n\}\), where \(\kappa > 1\) and the ground set \(\mathcal{G}\) is a nonempty polytope contained in \( [0,1]^n\). This nonconvex set is closely related to separable standard quadratic programming and appears as a substructure in potential-based network flow problems … Read more
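As a point of reference (an assumed textbook illustration, not a result quoted from the excerpt), the single-variable case with \(\mathcal{G} = [0,1]\) already shows why the set is nonconvex and how it is typically relaxed:

```latex
% For n = 1, kappa > 1, and ground set G = [0,1]: the curve y = x^kappa is convex
% on [0,1], and its chord through (0,0) and (1,1) is y = x, so
\[
  \operatorname{conv}\bigl(\mathcal{S}^\kappa\bigr)
  = \bigl\{ (x,y) \in [0,1] \times \mathbb{R} : x^\kappa \le y \le x \bigr\}.
\]
```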
Consider the context of solving a multi-objective simulation optimization problem with one or more continuous objective functions to global optimality on a compact feasible set. For a simple algorithm that consists of selecting a finite set of feasible points using a space-filling design, expending the same number of simulation replications at each point to estimate … Read more
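A minimal sketch of the kind of fixed-budget scheme the excerpt describes: select a finite set of feasible points (plain uniform sampling stands in here for a space-filling design), spend the same number of noisy replications at each point, and keep the estimated non-dominated points. The simulator and all constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_objectives(x, reps):
    """Hypothetical two-objective simulator: sample means over `reps` replications."""
    true = np.array([np.sum((x - 0.25) ** 2), np.sum((x + 0.25) ** 2)])
    return true + rng.standard_normal((reps, 2)).mean(axis=0)

def nondominated(F):
    """Indices of estimated objective vectors not dominated by any other point."""
    keep = []
    for i, fi in enumerate(F):
        if not any(np.all(fj <= fi) and np.any(fj < fi) for j, fj in enumerate(F) if j != i):
            keep.append(i)
    return keep

# Fixed budget: a finite design on the feasible box [-1, 1]^2 with the same
# number of simulation replications expended at every design point.
X = rng.uniform(-1.0, 1.0, size=(50, 2))
F = np.array([noisy_objectives(x, reps=20) for x in X])
pareto_estimate = X[nondominated(F)]
print(len(pareto_estimate), "estimated Pareto points")
```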
We propose a manifold AdaGrad-Norm method (\textsc{MAdaGrad}), which extends the norm version of AdaGrad (AdaGrad-Norm) to Riemannian optimization. In contrast to line-search schemes, which may require several exponential map computations per iteration, \textsc{MAdaGrad} requires only one. Assuming the objective function $f$ has Lipschitz continuous Riemannian gradient, we show that the method requires at most $\mathcal{O}(\varepsilon^{-2})$ … Read more
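The sketch below transplants the AdaGrad-Norm step-size rule to a concrete manifold, the unit sphere, for minimizing the Rayleigh quotient. It uses one exponential map per iteration, as the excerpt highlights, but it is an assumed illustration of the general idea rather than the MAdaGrad method itself; the problem data are made up.

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere at x applied to a tangent vector v."""
    nv = np.linalg.norm(v)
    return x if nv < 1e-12 else np.cos(nv) * x + np.sin(nv) * (v / nv)

def riemannian_grad(A, x):
    """Riemannian gradient of f(x) = x^T A x on the sphere: tangent projection of 2Ax."""
    g = 2.0 * A @ x
    return g - (g @ x) * x

# AdaGrad-Norm-style steps: step length eta / sqrt(accumulated squared gradient norms),
# with a single exponential map per iteration.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)); A = 0.5 * (A + A.T)
x = rng.standard_normal(6); x /= np.linalg.norm(x)
eta, acc = 1.0, 1e-8
for _ in range(500):
    g = riemannian_grad(A, x)
    acc += np.linalg.norm(g) ** 2
    x = sphere_exp(x, -(eta / np.sqrt(acc)) * g)
    x /= np.linalg.norm(x)                      # numerical safeguard: stay on the sphere
print(x @ A @ x, np.linalg.eigvalsh(A)[0])      # Rayleigh quotient vs. smallest eigenvalue
```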
In this paper, we address composite optimization problems on Hadamard manifolds, where the objective function is given by the sum of a smooth term (not necessarily convex) and a convex term (not necessarily differentiable). To solve this problem, we develop a proximal gradient method defined directly on the manifold, employing a strategy that enforces monotonicity … Read more
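Since \(\mathbb{R}^n\) with the Euclidean metric is the simplest Hadamard manifold, a flat-space proximal gradient step already shows the composite structure and a monotone safeguard of the kind the excerpt mentions. The sketch below (lasso with soft-thresholding, assumed data) is illustrative only and omits all manifold machinery.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (the convex, nondifferentiable term)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Composite objective F(x) = 0.5*||Ax - b||^2 + lam*||x||_1 on R^n.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10)); b = rng.standard_normal(30); lam = 0.5
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the smooth gradient
obj = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

x = np.zeros(10)
for _ in range(300):
    grad = A.T @ (A @ x - b)                    # gradient of the smooth term
    x_new = soft_threshold(x - grad / L, lam / L)
    if obj(x_new) <= obj(x):                    # simple monotone safeguard
        x = x_new
print(obj(x))
```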
Black-box optimization (BBO) addresses problems where objectives are accessible only through costly queries without gradients or explicit structure. Classical derivative-free methods—line search, direct search, and model-based solvers such as Bayesian optimization—form the backbone of BBO, yet often struggle in high-dimensional, noisy, or mixed-integer settings. Recent advances use machine learning (ML) and reinforcement learning (RL) to … Read more
The Bayesian paradigm offers principled tools for sequential decision-making under uncertainty, but its reliance on a probabilistic model for all parameters can hinder the incorporation of complex structural constraints. We introduce a minimalist Bayesian framework that places a prior only on the component of interest, such as the location of the optimum. Nuisance parameters are … Read more
A widely used approximation concept in multiobjective optimization is that of enclosures. These are unions of boxes defined by lower and upper bound sets that are used to cover the optimal sets of multiobjective optimization problems in the image space. The width of an enclosure serves as a quality measure. In this paper, we … Read more
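One common way to formalize "unions of boxes defined by lower and upper bound sets" (an assumption on our part, not a definition quoted from the paper): given a lower bound set \(L\) and an upper bound set \(U\) for the nondominated set in the image space \(\mathbb{R}^m\), the enclosure is

```latex
\[
  E(L,U) \;=\; \bigcup_{\substack{l \in L,\; u \in U \\ l \le u}} [l,u]
  \;=\; \bigl(L + \mathbb{R}^m_+\bigr) \cap \bigl(U - \mathbb{R}^m_+\bigr),
\]
% a union of image-space boxes that covers the nondominated set whenever L and U
% bound it componentwise from below and above.
```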