Maximum Likelihood Probability Measures over Sets and Applications to Data-Driven Optimization

Motivated by data-driven approaches to sequential decision-making under uncertainty, we study maximum likelihood estimation of a distribution over a general measurable space when, unlike traditional setups, realizations of the underlying uncertainty are not directly observable but instead are known to lie within observable sets. While extant work studied the special cases when the observed sets … Read more
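The idea of maximum likelihood with set-valued observations can be sketched on a toy finite-support example. This is a hedged illustration, not the paper's estimator: the support, the observed sets, and the coarse grid search are all hypothetical choices made for demonstration.

```python
# Hedged illustration (not the paper's method): maximum likelihood over a
# finite support when each draw is only known to lie in an observed set.
import math
from itertools import product

support = [1, 2, 3]                        # hypothetical finite support
observed_sets = [{1, 2}, {2, 3}, {2}]      # set-valued observations

best, best_ll = None, float("-inf")
k = 20                                     # grid resolution on the simplex
for a, b in product(range(k + 1), repeat=2):
    if a + b > k:
        continue
    p = {1: a / k, 2: b / k, 3: (k - a - b) / k}
    sums = [sum(p[x] for x in s) for s in observed_sets]
    if any(s == 0 for s in sums):
        continue                           # zero-likelihood candidate
    ll = sum(math.log(s) for s in sums)    # log of prod_i P(X in S_i)
    if ll > best_ll:
        best, best_ll = p, ll
# Every observed set contains the point 2, so the MLE puts all mass on 2.
```

Here the likelihood of a candidate distribution is the product of the probabilities it assigns to each observed set, rather than to individual realizations.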

Disjunctive Branch-and-Bound for Certifiably Optimal Low-Rank Matrix Completion

Low-rank matrix completion consists of computing a matrix of minimal complexity that recovers a given set of observations as accurately as possible. Unfortunately, existing methods for matrix completion are heuristics that, while highly scalable and often identifying high-quality solutions, do not provide an instance-wise certificate of optimality. We reexamine matrix completion with an optimality-oriented eye. … Read more
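The core idea of low-rank completion can be seen in a minimal example (this is a toy sketch, not the paper's branch-and-bound method): when a 2x2 matrix is constrained to have rank 1 and three entries are observed, the fourth is forced in closed form.

```python
# Toy rank-1 completion (illustrative only, not the paper's algorithm).
M = [[2.0, 6.0],
     [4.0, None]]   # None marks the single unobserved entry

# Rank 1 means the 2x2 determinant vanishes: M00*M11 - M01*M10 = 0,
# so the missing entry is determined by the observed ones.
M[1][1] = M[0][1] * M[1][0] / M[0][0]
```

Real instances are underdetermined, which is why solvers minimize rank (or a surrogate such as the nuclear norm) subject to matching the observations.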

Learning in Inverse Optimization: Incenter Cost, Augmented Suboptimality Loss, and Algorithms

In Inverse Optimization (IO), an expert agent solves an optimization problem parametric in an exogenous signal. From a learning perspective, the goal is to learn the expert’s cost function given a dataset of signals and corresponding optimal actions. Motivated by the geometry of the IO set of consistent cost vectors, we introduce the “incenter” concept, … Read more
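The consistency notion behind inverse optimization can be sketched on a finite feasible set. This is a simplified hedged example, not the paper's Augmented Suboptimality Loss: the feasible set and the loss below are illustrative choices.

```python
# Hedged sketch: the expert solves min_x c.x over a feasible set, and a
# candidate cost c is consistent with an observed action x* iff x* is optimal
# for c. A simple per-observation loss is c.x* - min_x c.x  >=  0.
vertices = [(0, 0), (1, 0), (0, 1), (1, 1)]   # toy feasible set (square vertices)
expert_action = (0, 0)                         # observed optimal action

def dot(c, x):
    return c[0] * x[0] + c[1] * x[1]

def suboptimality(c, x_star, feasible):
    """Zero exactly when x_star is optimal for cost vector c."""
    return dot(c, x_star) - min(dot(c, x) for x in feasible)
```

For instance, c = (1, 1) makes (0, 0) optimal (loss 0), while c = (-1, -1) would have sent the expert to (1, 1), giving a strictly positive loss.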

On the Partial Convexification of the Low-Rank Spectral Optimization: Rank Bounds and Algorithms

A Low-rank Spectral Optimization Problem (LSOP) minimizes a linear objective subject to multiple two-sided linear matrix inequalities intersected with a low-rank and spectral constrained domain set. Although solving LSOP is, in general, NP-hard, its partial convexification (i.e., replacing the domain set by its convex hull), termed “LSOP-R”, is often tractable and yields a high-quality solution. … Read more

On the Number of Pivots of Dantzig’s Simplex Methods for Linear and Convex Quadratic Programs

Refining and extending works by Ye and Kitahara-Mizuno, this paper presents new results on the number of pivots of simplex-type methods for solving linear programs of the Leontief kind, certain linear complementarity problems of the P kind, and nonnegative constrained convex quadratic programs. Our results contribute to the further understanding of the complexity and efficiency … Read more

On the longest chain of faces of the completely positive and copositive cones

We consider a wide class of closed convex cones K in the space of real n×n symmetric matrices and establish the existence of a chain of faces of K whose length attains the maximum possible value, n(n+1)/2 + 1. Examples of such cones include, but are not limited to, the completely positive and the copositive … Read more

Modeling risk for CVaR-based decisions in risk aggregation

Measuring aggregated risk is an important exercise for any risk-bearing carrier. It is not restricted to evaluating the known portfolio risk position alone, and may also include complying with regulatory requirements, diversification, etc. The main difficulty of risk aggregation is creating an underlying … Read more
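For concreteness, a hedged illustration of the CVaR measure underlying such decisions (definitions vary; here CVaR at level alpha is the average of the worst alpha fraction of sample losses, in the spirit of the Rockafellar-Uryasev formulation):

```python
# Empirical CVaR as a tail average (illustrative; exact only when
# alpha * n is an integer -- the general case needs a VaR-splitting term).
def empirical_cvar(losses, alpha):
    worst = sorted(losses, reverse=True)
    k = max(1, int(round(alpha * len(losses))))   # size of the alpha-tail
    return sum(worst[:k]) / k

losses = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# alpha = 0.2 averages the two worst losses, 10 and 9.
```

Unlike Value-at-Risk, which only reports a quantile, CVaR accounts for the magnitude of losses beyond that quantile, which is why it is the standard choice for aggregation-style constraints.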

Worst-Case Conditional Value at Risk for Asset Liability Management: A Novel Framework for General Loss Functions

Asset-liability management (ALM) is a challenging task faced by pension funds due to the uncertain nature of future asset returns and interest rates. To address this challenge, this paper presents a new mathematical model that uses a Worst-case Conditional Value-at-Risk (WCVaR) constraint to ensure that the funding ratio remains above a regulator-mandated threshold with a high … Read more

Optimized Dimensionality Reduction for Moment-based Distributionally Robust Optimization

Moment-based distributionally robust optimization (DRO) provides an optimization framework to integrate statistical information with traditional optimization approaches. Under this framework, one assumes that the underlying joint distribution of random parameters lies in a distributional ambiguity set constructed from moment information and makes decisions against the worst-case distribution within the set. Although most moment-based DRO problems … Read more
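A classical instance of worst-case analysis over a moment ambiguity set is Cantelli's one-sided Chebyshev bound: over all distributions with a given mean and variance, the worst-case tail probability has a closed form. The sketch below illustrates that known bound; it is not the paper's method.

```python
# Cantelli's inequality: for t > mu, the tight upper bound on P(X >= t)
# over all distributions with mean mu and variance sigma2.
def worst_case_tail(mu, sigma2, t):
    assert t > mu, "bound applies to thresholds above the mean"
    return sigma2 / (sigma2 + (t - mu) ** 2)

# mean 0, variance 1, threshold 2: worst-case P(X >= 2) is 1/5,
# far larger than the Gaussian tail (~0.023) -- the price of robustness.
```

This is exactly the flavor of moment-based DRO: the decision-maker hedges against every distribution matching the first two moments, not just a nominal one.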

The complexity of first-order optimization methods from a metric perspective

A central tool for understanding first-order optimization algorithms is the Kurdyka-Lojasiewicz (KL) inequality. Standard approaches to such methods rely crucially on this inequality to leverage sufficient decrease conditions involving gradients or subgradients. However, the KL property fundamentally concerns not subgradients but rather “slope”, a purely metric notion. By highlighting this view, and avoiding any use of … Read more
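A well-known smooth special case of the KL property is the Polyak-Lojasiewicz (PL) condition, under which gradient descent contracts the suboptimality gap by a fixed factor per step. The demo below is our own illustration on f(x) = x², not drawn from the paper.

```python
# Hedged demo: for f(x) = x^2 the PL condition holds, and gradient descent
# with a fixed step contracts the gap f(x_k) - f* geometrically.
def grad_descent(x0, steps, lr=0.25):
    x = x0
    gaps = [x * x]                 # f(x) - f*, with f* = 0
    for _ in range(steps):
        x = x - lr * 2 * x         # gradient of x^2 is 2x
        gaps.append(x * x)
    return gaps

# With lr = 0.25 each step maps x to 0.5 x, so the gap shrinks by
# a factor of 0.25 per iteration: 1, 0.25, 0.0625, ...
```

The metric perspective the abstract alludes to replaces the gradient in such arguments by the slope, so the same convergence reasoning applies in spaces with no differentiable structure.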