Maximum-Entropy Sampling and the Boolean Quadric Polytope

We consider a bound for the maximum-entropy sampling problem (MESP) that is based on solving a max-det problem over a relaxation of the Boolean Quadric Polytope (BQP). This approach to MESP was first suggested by Christoph Helmberg over 15 years ago, but has apparently never been further elaborated or computationally investigated. We find that the …
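
For context, a standard statement of MESP is: given an $n \times n$ covariance matrix $C$ and an integer $0 < s < n$, find a size-$s$ principal submatrix of maximum log-determinant; the Boolean Quadric Polytope collects the exact linearizations of products of binary variables:

$$ \max_{S \subseteq \{1,\dots,n\},\ |S| = s} \log\det C[S,S], \qquad \mathrm{BQP}_n := \operatorname{conv}\bigl\{(x,y) : x \in \{0,1\}^n,\ y_{ij} = x_i x_j,\ 1 \le i < j \le n \bigr\}. $$

A bound of the kind described here would then maximize a determinant-based concave objective over a tractable relaxation of $\mathrm{BQP}_n$ rather than over the discrete feasible set itself.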

Approximate Positively Correlated Distributions and Approximation Algorithms for D-optimal Design

Experimental design is a classical problem in statistics and has also found new applications in machine learning. In the experimental design problem, the aim is to estimate an unknown m-dimensional vector x from linear measurements, each of which is corrupted by Gaussian noise. The goal is to pick k out of the given …
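
As a point of reference (the precise model in the truncated text may differ), the D-optimal version of this selection problem takes measurements $y_i = a_i^\top x + \epsilon_i$ with $a_i \in \mathbb{R}^m$ and $\epsilon_i \sim N(0,\sigma^2)$, and chooses $k$ of them so as to maximize the determinant of the resulting information matrix:

$$ \max_{S \subseteq \{1,\dots,n\},\ |S| = k} \ \det\Bigl(\sum_{i \in S} a_i a_i^\top\Bigr). $$

Maximizing this determinant minimizes the volume of the confidence ellipsoid of the least-squares estimator of $x$, which is the statistical rationale for the D-optimality criterion.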

Forecasting Solar Flares using magnetogram-based predictors and Machine Learning

We propose a forecasting approach for solar flares based on data from Solar Cycle 24, taken by the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) mission. In particular, we use the Space-weather HMI Active Region Patches (SHARP) product, which provides cut-out magnetograms of solar active regions (ARs) on the Sun …

Model and Discretization Error Adaptivity within Stationary Gas Transport Optimization

The minimization of operation costs for natural gas transport networks is studied. Based on a recently developed model hierarchy, ranging from detailed models of non-stationary partial differential equations with temperature dependence to highly simplified algebraic equations, modeling and discretization error estimates are presented to control the overall error in an optimization method for stationary and …
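
To make the two ends of such a hierarchy concrete (an illustration based on common gas-network models, not necessarily the exact hierarchy of the paper), the coarsest level for a single pipe is often an algebraic pressure-drop law of Weymouth type,

$$ p_{\mathrm{in}}^2 - p_{\mathrm{out}}^2 = \Lambda\, q\,|q|, $$

with mass flow $q$ and a pipe-dependent friction constant $\Lambda$, while the finest level keeps the non-stationary, temperature-dependent PDE description; adaptivity then means selecting the model level and discretization for each pipe only as accurately as the error estimates require.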

On the local stability of semidefinite relaxations

In this paper we consider a parametric family of polynomial optimization problems over algebraic sets. Although these problems are typically nonconvex, tractable convex relaxations via semidefinite programming (SDP) have been proposed. Oftentimes in applications there is a natural value of the parameters for which the relaxation will solve the problem exactly. We study conditions …
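
A minimal instance of this setup (an illustration of the general mechanism, not the specific class treated in the paper) is a quadratically constrained problem and its Shor-type SDP relaxation, obtained by replacing the rank-one matrix $xx^\top$ with a positive semidefinite variable:

$$ \min_{x} \ x^\top C x \ \ \text{s.t.}\ \ x^\top A_i x = b_i \qquad\longrightarrow\qquad \min_{X \succeq 0} \ \langle C, X\rangle \ \ \text{s.t.}\ \ \langle A_i, X\rangle = b_i. $$

The relaxation solves the original problem exactly whenever its optimal $X$ has rank one, and the local-stability question is whether such exactness at a nominal parameter value persists under small perturbations of the data $(C, A_i, b_i)$.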

Sparse principal component analysis and its l1-relaxation

Principal component analysis (PCA) is one of the most widely used dimensionality reduction methods in scientific data analysis. In many applications, for additional interpretability, it is desirable for the factor loadings to be sparse, that is, we solve PCA with an additional cardinality (l0) constraint. The resulting optimization problem is called the sparse principal component …
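
In a standard formulation (the paper's precise variant may differ), the cardinality-constrained problem and a common $l_1$-relaxation for a covariance matrix $A$ read

$$ \max_{\|x\|_2 = 1,\ \|x\|_0 \le k} \ x^\top A x \qquad\text{and}\qquad \max_{\|x\|_2 = 1,\ \|x\|_1 \le \sqrt{k}} \ x^\top A x. $$

The budget $\sqrt{k}$ is natural because, by Cauchy-Schwarz, every unit vector with at most $k$ nonzero entries satisfies $\|x\|_1 \le \sqrt{k}$, so the relaxed value is an upper bound on the sparse PCA value.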

Efficient Convex Optimization for Linear MPC

Model predictive control (MPC) formulations with linear dynamics and quadratic objectives can be solved efficiently by using a primal-dual interior-point framework, with complexity proportional to the length of the horizon. An alternative, which is better able to exploit the similarity of the problems that are solved at each decision point of linear MPC, is to …
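
For reference, the subproblem solved at each decision point of linear MPC is typically a linear-quadratic optimal control problem of the form (constraints shown generically; the exact setup may include more structure)

$$ \min_{u_0,\dots,u_{N-1}} \ \sum_{k=0}^{N-1}\bigl(x_k^\top Q x_k + u_k^\top R u_k\bigr) + x_N^\top P x_N \quad \text{s.t.}\quad x_{k+1} = A x_k + B u_k,\ \ x_0 \ \text{given},\ \ (x_k, u_k) \in \mathcal{X}\times\mathcal{U}. $$

Because the KKT systems of this problem are banded in the stage index $k$, an interior-point method can factor them in time linear in the horizon length $N$, which is the complexity statement above.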

Bootstrap Robust Prescriptive Analytics

We address the problem of prescribing an optimal decision in a framework where its cost depends on uncertain problem parameters $Y$ that need to be learned from data. Earlier work by Bertsimas and Kallus (2014) transforms classical machine learning methods that merely predict $Y$ from supervised training data $[(x_1, y_1), \dots, (x_n, y_n)]$ into prescriptive …
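
The construction in Bertsimas and Kallus is, roughly, to replace a point prediction of $Y$ by data-driven weights inside the decision problem:

$$ \hat z_n(x) \in \arg\min_{z \in \mathcal{Z}} \ \sum_{i=1}^n w_{n,i}(x)\, c(z; y_i), $$

where $c(z; y)$ is the decision cost and the weights $w_{n,i}(x)$ come from a machine learning method (for example $k$-nearest neighbors or tree ensembles) trained on $[(x_1, y_1), \dots, (x_n, y_n)]$.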

Optimization of Stochastic Problems with Probability Functions via Differential Evolution

Chance-constrained programming, quantile/Value-at-Risk (VaR) optimization, and integral quantile/Conditional Value-at-Risk (CVaR) optimization problems, viewed as Stochastic Programming Problems with Probability Functions (SPP-PF), are among the most widely studied optimization problems in recent years. As a rule, a real-life SPP-PF is a nonsmooth, nonconvex optimization problem with a complex objective-function geometry. Moreover, it often cannot …
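
For orientation, the standard objects behind these three problem classes are a chance constraint, the quantile/VaR function, and CVaR in its Rockafellar-Uryasev form:

$$ \mathbb{P}\{g(x,\xi) \le 0\} \ge \alpha, \qquad \mathrm{VaR}_\alpha(x) = \min\{t : \mathbb{P}\{f(x,\xi) \le t\} \ge \alpha\}, \qquad \mathrm{CVaR}_\alpha(x) = \min_{t}\Bigl\{ t + \tfrac{1}{1-\alpha}\, \mathbb{E}\bigl[(f(x,\xi) - t)_+\bigr] \Bigr\}. $$

Each involves probabilities or tail expectations of the loss distribution, which is the source of the nonsmoothness and nonconvexity that make population-based, derivative-free methods such as differential evolution attractive.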

Convergent Prediction-Correction-based ADMM for multi-block separable convex programming

The direct extension of the classic alternating direction method of multipliers (ADMMe) to the multi-block separable convex optimization problem is not necessarily convergent, though it often performs very well in practice. In order to preserve the numerical advantages of ADMMe and obtain convergence, many modified ADMM variants have been proposed, obtained by correcting the output of ADMMe or …
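
To fix notation (a standard statement of the setting, which may differ in detail from the paper's), the multi-block problem and its augmented Lagrangian with penalty parameter $\beta > 0$ are

$$ \min \ \sum_{i=1}^m \theta_i(x_i) \ \ \text{s.t.}\ \ \sum_{i=1}^m A_i x_i = b, \qquad \mathcal{L}_\beta(x_1,\dots,x_m,\lambda) = \sum_{i=1}^m \theta_i(x_i) - \lambda^\top\Bigl(\sum_{i=1}^m A_i x_i - b\Bigr) + \frac{\beta}{2}\Bigl\|\sum_{i=1}^m A_i x_i - b\Bigr\|^2, $$

where ADMMe minimizes $\mathcal{L}_\beta$ cyclically over $x_1, \dots, x_m$ (each block using the newest values of the preceding blocks) and then updates $\lambda^{k+1} = \lambda^k - \beta(\sum_i A_i x_i^{k+1} - b)$. For $m \ge 3$ this scheme can fail to converge, which is what the prediction-correction step is designed to repair.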