Convex envelopes of bounded monomials on two-variable cones

We consider an \(n\)-variate monomial function that is restricted both in value, by lower and upper bounds, and in domain, by two homogeneous linear inequalities. Such functions are building blocks of several problems arising in practical applications that fall under the class of Mixed-Integer Nonlinear Optimization. We show that the upper envelope … Read more
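As a reading aid only (the notation below is ours; the precise cone and exponents are spelled out in the full abstract), the feasible set can be pictured as
\[
S=\Bigl\{x\in\mathbb{R}^{n}_{>0}\;:\;\ell\le\prod_{i=1}^{n}x_i^{a_i}\le u,\;\;c^{\top}x\ge 0,\;\;d^{\top}x\ge 0\Bigr\},
\]
where the two homogeneous inequalities involve only two of the variables and hence carve a two-dimensional cone out of the domain, and the objects of study are the convex and concave envelopes of the monomial over \(S\).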

Information Complexity of Mixed-integer Convex Optimization

We investigate the information complexity of mixed-integer convex optimization under different types of oracles. We establish new lower bounds for the standard first-order oracle, improving upon the previously best known lower bound. This leaves only a lower-order linear term (in the dimension) as the gap between the lower and upper bounds. This is derived … Read more
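For context, a hedged sketch of the problem class and oracle model behind such bounds (notation ours, not necessarily the paper's):
\[
\min\bigl\{\,f(x)\;:\;x\in C,\ x_i\in\mathbb{Z}\ \text{for }i=1,\dots,n\,\bigr\},
\]
with \(f\) convex and \(C\) a convex set, where each query to a first-order oracle at a point \(x\) returns \(f(x)\) together with a subgradient \(g\in\partial f(x)\) (and, for infeasible points, separation information for \(C\)); information complexity counts the number of such queries needed to reach a prescribed accuracy.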

The Terminator: An Integration of Inner and Outer Approximations for Solving Regular and Distributionally Robust Chance Constrained Programs via Variable Fixing

We present a novel approach aimed at enhancing the efficacy of solving both regular and distributionally robust chance constrained programs using an empirical reference distribution. In general, these programs can be reformulated as mixed-integer programs (MIPs) by introducing binary variables for each scenario, indicating whether a scenario should be satisfied. While existing methods have predominantly … Read more
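For readers unfamiliar with the scenario reformulation mentioned above, the standard big-M mixed-integer model of a chance constraint under an empirical reference distribution with equally likely scenarios \(\xi^1,\dots,\xi^N\) reads (our notation, shown only for illustration):
\[
g(x,\xi^i)\le M\,z_i\ \ (i=1,\dots,N),\qquad \sum_{i=1}^{N}z_i\le\lfloor\varepsilon N\rfloor,\qquad z\in\{0,1\}^N,
\]
where \(z_i=1\) flags a scenario that may be violated and \(\varepsilon\) is the risk level; variable fixing in this context typically means deciding some of the \(z_i\) up front.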

Evolving Scientific Discovery by Unifying Data and Background Knowledge with AI Hilbert

The discovery of scientific formulae that parsimoniously explain natural phenomena and align with existing background theory is a key goal in science. Historically, scientists have derived natural laws by manipulating equations based on existing knowledge, forming new equations, and verifying them experimentally. In recent years, data-driven scientific discovery has emerged as a viable competitor in … Read more

Learning the Follower’s Objective Function in Sequential Bilevel Games

We consider bilevel optimization problems in which the leader has no or only partial knowledge about the objective function of the follower. The studied setting is a sequential one in which the bilevel game is played repeatedly. This allows the leader to learn the objective function of the follower over time. We focus on two … Read more
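Schematically (our notation), the repeated setting can be thought of as rounds \(t=1,2,\dots\) of the bilevel problem
\[
\min_{x_t\in X}\ F(x_t,y_t)\quad\text{s.t.}\quad y_t\in\arg\min_{y\in Y(x_t)}\ f_\theta(x_t,y),
\]
in which the parameter \(\theta\) of the follower's objective is unknown (or only partially known) to the leader and is estimated over time from the observed responses \(y_1,\dots,y_{t-1}\).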

Gradient-based rho Parameter for Progressive Hedging

Watson and Woodruff (2011) developed a heuristic for computing variable-dependent values of the penalty parameter \(\rho\) from the model itself. We combine this heuristic with a gradient-based method to obtain a new method for calculating \(\rho\) values. We then introduce a method for iteratively computing variable-dependent \(\rho\) values. This method is based … Read more
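To fix where \(\rho\) enters, recall the standard Progressive Hedging iteration for scenario \(s\) (our summary; with variable-dependent penalties, \(\rho\) is a per-variable vector or diagonal matrix):
\[
x_s^{k+1}\in\arg\min_{x}\Bigl\{c_s(x)+(w_s^{k})^{\top}x+\tfrac12\lVert x-\bar{x}^{k}\rVert_{\rho}^{2}\Bigr\},\qquad
w_s^{k+1}=w_s^{k}+\rho\bigl(x_s^{k+1}-\bar{x}^{k+1}\bigr),
\]
where \(\bar{x}^{k}\) is the probability-weighted average of the scenario solutions and \(\lVert\cdot\rVert_{\rho}\) is the \(\rho\)-weighted norm; the choice of \(\rho\) strongly affects convergence, which is what the heuristics above target.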

Data-Driven Counterfactual Optimization For Personalized Clinical Decision-Making

Chronic diseases have a significant impact on global mortality rates and healthcare costs. Notably, machine learning-based clinical assessment tools are becoming increasingly popular for informing treatment targets for high-risk patients with chronic diseases. However, using these tools alone, it is challenging to identify personalized treatment targets that lower the risks of adverse outcomes to a … Read more

Further Development in Convex Conic Reformulation of Geometric Nonconvex Conic Optimization Problems

A geometric nonconvex conic optimization problem (COP) was recently proposed by Kim, Kojima, and Toh as a unified framework for convex conic reformulation of a class of quadratic optimization problems and polynomial optimization problems. The nonconvex COP minimizes a linear function over the intersection of a nonconvex cone K, a convex subcone J of the … Read more
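Schematically (our paraphrase; the precise assumptions are in the full abstract), the problem has the form
\[
\zeta^{*}=\inf\bigl\{\,\langle Q,\,x\rangle\;:\;x\in\mathbb{K}\cap\mathbb{J},\ \langle H,\,x\rangle=1\,\bigr\},
\]
with \(\mathbb{K}\) a nonconvex cone and \(\mathbb{J}\) a convex cone, and the convex conic reformulation replaces \(\mathbb{K}\cap\mathbb{J}\) by its closed convex hull while preserving the optimal value under suitable geometric conditions.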

Distributions and Bootstrap for Data-based Stochastic Programming

In the context of optimization under uncertainty, we consider various combinations of distribution estimation and resampling (bootstrap and bagging) for obtaining samples used to acquire a solution and for computing a confidence interval for an optimality gap. This paper makes three experimental contributions to ongoing research in data-driven stochastic programming: a) most of the … Read more
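As a concrete illustration of the kind of computation involved, here is a minimal, self-contained sketch of a bootstrap confidence interval for the optimality gap of a data-driven solution, using a toy newsvendor model; the data, helper names, and resampling scheme below are assumptions for illustration, not the paper's experimental setup.

```python
# Bootstrap confidence interval for the optimality gap of a candidate solution
# in a toy data-driven newsvendor problem. All names and numbers here are
# illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
price, cost = 5.0, 3.0                                    # unit sell price and unit cost
demand_data = rng.gamma(shape=2.0, scale=50.0, size=200)  # "observed" demand data

def newsvendor_cost(order, demand):
    """Negative profit for an order quantity under a demand realization."""
    return cost * order - price * np.minimum(order, demand)

def saa_solution(demands):
    """Sample-average solution: the critical-fractile quantile of the data."""
    return np.quantile(demands, (price - cost) / price)

x_hat = saa_solution(demand_data)                         # candidate solution from all data

# Bootstrap the gap estimator E[c(x_hat, D)] - min_x E[c(x, D)].
gaps = []
for _ in range(1000):
    sample = rng.choice(demand_data, size=demand_data.size, replace=True)
    x_star = saa_solution(sample)                         # best response to the resample
    gap = np.mean(newsvendor_cost(x_hat, sample) - newsvendor_cost(x_star, sample))
    gaps.append(max(gap, 0.0))

low, high = np.percentile(gaps, [2.5, 97.5])
print(f"order {x_hat:.1f}, 95% bootstrap interval for the gap: [{low:.2f}, {high:.2f}]")
```

The same template applies when the resamples come from an estimated distribution rather than the empirical one, which is one of the combinations the abstract refers to.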

Expected decrease for derivative-free algorithms using random subspaces

Derivative-free algorithms seek the minimum of a given function based only on function values queried at appropriate points. Although these methods are widely used in practice, their performance is known to worsen as the problem dimension increases. Recent advances in developing randomized derivative-free techniques have tackled this issue by working in low-dimensional subspaces that are … Read more
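To make the idea of working in random subspaces concrete, here is a minimal sketch (illustrative only; the step-size rule and sufficient-decrease test are our assumptions, not the paper's method) of a direct-search iteration whose polling directions live in a random \(r\)-dimensional subspace:

```python
# Direct search on f: R^n -> R that polls +/- the columns of a random n-by-r
# matrix each iteration, so the per-iteration cost is 2r function evaluations
# regardless of the ambient dimension n. Illustrative sketch only.
import numpy as np

def subspace_direct_search(f, x0, n_iters=200, r=2, step0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, step = np.array(x0, dtype=float), step0
    fx = f(x)
    for _ in range(n_iters):
        P = rng.standard_normal((x.size, r)) / np.sqrt(r)  # random subspace basis
        improved = False
        for d in np.hstack([P, -P]).T:                     # poll +/- each column
            trial = x + step * d
            ft = f(trial)
            if ft < fx - 1e-8 * step**2:                   # sufficient decrease
                x, fx, improved = trial, ft, True
                break
        step = step * 2.0 if improved else step * 0.5      # expand or shrink
    return x, fx

# Toy usage on an ill-conditioned quadratic in dimension 50.
if __name__ == "__main__":
    A = np.diag(np.linspace(1.0, 10.0, 50))
    x_opt, f_opt = subspace_direct_search(lambda x: x @ A @ x, np.ones(50))
    print(f"final value {f_opt:.4f}")
```

The appeal of the subspace approach is visible even in this toy: only \(2r\) evaluations are spent per iteration, while a full-space poll would need \(2n\).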