Strong Formulations and Algorithms for Regularized A-Optimal Design

We study the Regularized A-Optimal Design (RAOD) problem, which selects a subset of \(k\) experiments to minimize the trace of the inverse of the Fisher information matrix, regularized by a scaled identity matrix. RAOD has broad applications in Bayesian experimental design, sensor placement, and cold-start recommendation. We prove its NP-hardness via a reduction from the independent set problem. By leveraging convex envelope techniques, we propose a new convex integer programming formulation for RAOD whose continuous relaxation dominates those of existing formulations. More importantly, we demonstrate that our continuous relaxation achieves a bounded optimality gap for all \(k\), whereas previous relaxations may suffer from unbounded gaps. This new formulation enables the development of an exact cutting-plane algorithm with superior efficiency, especially in high-dimensional, small-\(k\) scenarios. We also investigate scalable forward and backward greedy approximation algorithms for solving RAOD, each with provable performance guarantees over different ranges of \(k\). Finally, our numerical results on synthetic and real data confirm the efficacy of the proposed algorithms. We further showcase the practical effectiveness of RAOD by applying it to a real-world user cold-start recommendation problem.
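For concreteness, here is the standard formulation this abstract refers to, written in notation we introduce for illustration (the candidate experiments \(a_1, \dots, a_n \in \mathbb{R}^d\), the budget \(k\), and the regularization weight \(\lambda > 0\) are not named in the abstract itself):

\[
\min_{S \subseteq \{1,\dots,n\},\ |S| = k} \ \operatorname{tr}\!\Big(\Big(\lambda I_d + \sum_{i \in S} a_i a_i^\top\Big)^{-1}\Big).
\]

The forward greedy algorithm mentioned above builds \(S\) one experiment at a time, each step adding the candidate that most decreases this objective. A minimal sketch, assuming the formulation above (the function name and interface are illustrative, not the paper's implementation):

```python
import numpy as np

def forward_greedy_raod(A, k, lam):
    """Forward greedy sketch for RAOD.

    A   : (n, d) array whose rows are the candidate experiments a_i
    k   : number of experiments to select
    lam : regularization weight lambda > 0
    """
    n, d = A.shape
    selected = []
    M = lam * np.eye(d)  # running regularized information matrix
    for _ in range(k):
        best_i, best_val = None, np.inf
        for i in range(n):
            if i in selected:
                continue
            # RAOD objective value if experiment i were added
            val = np.trace(np.linalg.inv(M + np.outer(A[i], A[i])))
            if val < best_val:
                best_i, best_val = i, val
        selected.append(best_i)
        M += np.outer(A[best_i], A[best_i])
    return selected
```

Each greedy step above pays an \(O(n d^3)\) cost for the naive per-candidate inversions; a rank-one (Sherman-Morrison) update would lower the per-candidate cost, but the naive version keeps the objective explicit.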
