A Geometric Perspective on Polynomially Solvable Convex Maximization

Convex maximization arises in many applications but is generally NP-hard, even for low-rank objectives. This paper introduces a set of broadly applicable conditions that certify when such problems are polynomially solvable. Our main condition is a new property of the feasible set, which we term co-monotonicity. We show that this property holds for two important … Read more
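For concreteness, one common way to formalize the rank-\(k\) setting mentioned above (an illustration only; the paper's precise setting and the definition of co-monotonicity are in the full text) is

\[
\max_{x \in P} \; g(Cx), \qquad P \subseteq \mathbb{R}^n \text{ a polytope}, \quad C \in \mathbb{R}^{k \times n}, \quad g : \mathbb{R}^k \to \mathbb{R} \text{ convex},
\]

so that the convex objective depends on \(x\) only through a \(k\)-dimensional linear image of the feasible set.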

AI for Enhancing Operations Research of Agriculture and Energy

This paper surveys optimization problems arising in agriculture, energy systems, and water-energy coordination from an operations research perspective. These problems are commonly formulated as integer nonlinear programs, mixed-integer nonlinear programs, or combinatorial set optimization models, characterized by nonlinear physical constraints, discrete decisions, and intertemporal coupling. Such structures pose significant computational challenges in large-scale and repeated-solution … Read more

Global optimization of low-rank polynomials

This work considers polynomial optimization problems where the objective admits a low-rank canonical polyadic tensor decomposition. We introduce LRPOP (low-rank polynomial optimization), a new hierarchy of semidefinite programming relaxations for which the size of the semidefinite blocks is determined by the canonical polyadic rank rather than the number of variables. As a result, LRPOP can … Read more
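As an illustration of the structure being exploited (the paper's exact notion of canonical polyadic rank may be more general), a degree-\(d\) form whose coefficient tensor admits a rank-\(R\) symmetric canonical polyadic decomposition can be written as

\[
p(x) = \sum_{r=1}^{R} \lambda_r \,(a_r^\top x)^{d}, \qquad a_r \in \mathbb{R}^n,\ \lambda_r \in \mathbb{R},
\]

so the objective depends on \(x\) only through the \(R\) linear forms \(a_r^\top x\), which is what allows the relaxation blocks to scale with \(R\) rather than with \(n\).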

Solving the Heilbronn Triangle Problem using Global Optimization Methods

We study the Heilbronn triangle problem, which involves placing \(n\) points in the unit square such that the minimum area of any triangle formed by these points is maximized. A straightforward maximin formulation of this problem is highly nonlinear and nonconvex due to the presence of bilinear terms and absolute value equations. We propose two … Read more
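A maximin formulation of the kind the abstract refers to (a standard model, written here for illustration) maximizes the smallest triangle area over all point triples:

\[
\max_{t,\ (x_i,y_i) \in [0,1]^2} \; t
\quad \text{s.t.} \quad
t \le \tfrac{1}{2}\bigl| x_i (y_j - y_k) + x_j (y_k - y_i) + x_k (y_i - y_j) \bigr|, \qquad 1 \le i < j < k \le n,
\]

where the right-hand side is the area of the triangle with vertices \((x_i,y_i)\), \((x_j,y_j)\), \((x_k,y_k)\); the bilinear terms and absolute values mentioned above appear directly in these constraints.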

Inexact subgradient algorithm with a non-asymptotic convergence guarantee for copositive programming problems

In this paper, we propose a subgradient algorithm with a non-asymptotic convergence guarantee to solve copositive programming problems. The subproblem to be solved at each iteration is a standard quadratic programming problem, which is NP-hard in general. However, the proposed algorithm allows this subproblem to be solved inexactly. For a prescribed accuracy $\epsilon > 0$ … Read more
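For context (a standard fact, not specific to this paper's algorithm), the standard quadratic programming subproblem minimizes a quadratic form over the unit simplex,

\[
\min_{x \in \mathbb{R}^n} \; x^\top Q x \quad \text{s.t.} \quad e^\top x = 1, \; x \ge 0,
\]

and a symmetric matrix \(Q\) is copositive exactly when the optimal value of this problem is nonnegative, which is why such subproblems arise when optimizing over the copositive cone.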

Extreme Strong Branching for QCQPs

For mixed-integer programs (MIPs), strong branching is a highly effective variable selection method to reduce the number of nodes in the branch-and-bound algorithm. Extending it to nonlinear problems is conceptually simple but practically limited. Branching on a binary variable fixes the variable to 0 or 1, whereas branching on a continuous variable requires an additional … Read more
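To make the binary case concrete (a generic sketch of strong branching scoring, not necessarily the rule used in this paper): for each fractional binary candidate \(x_j\), one solves the two child relaxations with \(x_j = 0\) and \(x_j = 1\), records the resulting bound improvements \(\Delta_j^0\) and \(\Delta_j^1\), and ranks candidates by a score such as the product rule

\[
\mathrm{score}(j) = \max(\Delta_j^0, \epsilon) \cdot \max(\Delta_j^1, \epsilon)
\]

for a small \(\epsilon > 0\), branching on the variable with the largest score.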

Convexification of a Separable Function over a Polyhedral Ground Set

In this paper, we study the set \(\mathcal{S}^\kappa = \{ (x,y)\in\mathcal{G}\times\mathbb{R}^n : y_j = x_j^\kappa , j=1,\dots,n\}\), where \(\kappa > 1\) and the ground set \(\mathcal{G}\) is a nonempty polytope contained in \( [0,1]^n\). This nonconvex set is closely related to separable standard quadratic programming and appears as a substructure in potential-based network flow problems … Read more
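The simplest instance of this set is already instructive: when the ground set is the whole box, \(\mathcal{G} = [0,1]^n\), the function \(x_j^\kappa\) is convex on \([0,1]\) and its concave envelope there is the secant through \((0,0)\) and \((1,1)\), so (a well-known special case, stated only for illustration; the paper treats general polytopes \(\mathcal{G}\))

\[
\mathrm{conv}(\mathcal{S}^\kappa) = \bigl\{ (x,y) \in [0,1]^n \times \mathbb{R}^n : x_j^\kappa \le y_j \le x_j, \ j = 1,\dots,n \bigr\}.
\]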

Global Multi-Objective Simulation Optimization: Error Bounds and Convergence Rates

Consider the context of solving a multi-objective simulation optimization problem with one or more continuous objective functions to global optimality on a compact feasible set. For a simple algorithm that consists of selecting a finite set of feasible points using a space-filling design, expending the same number of simulation replications at each point to estimate … Read more
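In symbols, the simple algorithm described above (as we read the abstract; the notation here is ours) evaluates a space-filling design \(x_1,\dots,x_m\) in the feasible set, spends \(r\) replications per point, estimates each objective by its sample mean

\[
\bar{G}_i(x_k) = \frac{1}{r} \sum_{l=1}^{r} G_i(x_k; \xi_l), \qquad i = 1,\dots,p, \quad k = 1,\dots,m,
\]

and returns the design points whose estimated objective vectors \(\bigl(\bar{G}_1(x_k),\dots,\bar{G}_p(x_k)\bigr)\) are non-dominated.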

A Riemannian AdaGrad-Norm Method

We propose a manifold AdaGrad-Norm method (\textsc{MAdaGrad}), which extends the norm version of AdaGrad (AdaGrad-Norm) to Riemannian optimization. In contrast to line-search schemes, which may require several exponential map computations per iteration, \textsc{MAdaGrad} requires only one. Assuming the objective function $f$ has a Lipschitz continuous Riemannian gradient, we show that the method requires at most $\mathcal{O}(\varepsilon^{-2})$ … Read more
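For reference, the Euclidean AdaGrad-Norm update accumulates squared gradient norms into a single scalar step-size denominator; a natural Riemannian analogue in the spirit of the abstract (the paper's exact update may differ in details such as the retraction used) reads

\[
b_{k+1}^2 = b_k^2 + \|\operatorname{grad} f(x_k)\|_{x_k}^2, \qquad
x_{k+1} = \operatorname{Exp}_{x_k}\!\Bigl( -\tfrac{\eta}{b_{k+1}} \operatorname{grad} f(x_k) \Bigr),
\]

which uses exactly one exponential map evaluation per iteration, consistent with the comparison to line-search schemes above.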