Efficient Solution of Maximum-Entropy Sampling Problems

We consider a new approach for the maximum-entropy sampling problem (MESP) that is based on bounds obtained by maximizing a function of the form ldet M(x) over linear constraints, where M(x) is linear in the n-vector x. These bounds can be computed very efficiently and are superior to all previously known bounds for MESP on most … Read more
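As a rough illustration (not the paper's exact formulation), such a bound problem might take the following form, where the sample size $s$, the all-ones vector $e$, and the box and cardinality constraints are assumptions added here for concreteness:

\[
\max_{x \in \mathbb{R}^n} \; \mathrm{ldet}\, M(x)
\quad \text{s.t.} \quad e^\top x = s, \;\; 0 \le x \le 1,
\]

a concave maximization over linear constraints (on the region where $M(x) \succ 0$), since $M(x)$ is linear in $x$.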

On positive duality gaps in semidefinite programming

We study semidefinite programs (SDPs) with positive duality gaps, i.e., different optimal values in the primal and dual problems. These SDPs are considered extremely pathological: they are often unsolvable, and they also serve as models of more general pathological convex programs. We first fully characterize two-variable SDPs with … Read more
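For reference, in standard notation (not taken from the paper), the primal-dual SDP pair and the gap read

\[
p^* = \inf_X \{\, \langle C, X\rangle : \langle A_i, X\rangle = b_i,\ i=1,\dots,m,\ X \succeq 0 \,\}, \qquad
d^* = \sup_y \Bigl\{\, b^\top y : C - \sum_{i=1}^m y_i A_i \succeq 0 \,\Bigr\}.
\]

Weak duality always gives $p^* \ge d^*$; a positive duality gap means $p^* > d^*$ strictly, even when both problems are feasible.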

On the Complexity of Detecting Convexity over a Box

It has recently been shown that the problem of testing global convexity of polynomials of degree four is strongly NP-hard, answering an open question of N.Z. Shor. This result is minimal in the degree of the polynomial when global convexity is of concern. In a number of applications, however, one is interested in testing convexity … Read more
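For a polynomial $f$ (or any twice-differentiable function) and a full-dimensional box $B = [l, u] \subseteq \mathbb{R}^n$ (notation introduced here only for illustration), convexity over the box amounts to the condition

\[
\nabla^2 f(x) \succeq 0 \quad \text{for all } x \in B,
\]

and it is the verification of this restricted condition, rather than global convexity, whose complexity is at stake.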

The Proximal Alternating Minimization Algorithm for two-block separable convex optimization problems with linear constraints

The Alternating Minimization Algorithm (AMA) has been proposed by Tseng to solve convex programming problems with two-block separable linear constraints and objectives, whereby (at least) one of the components of the latter is assumed to be strongly convex. The fact that one of the subproblems to be solved within the iteration process of AMA does … Read more
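For orientation, Tseng's AMA for $\min_{x,z}\{f(x)+g(z) : Ax+Bz=b\}$, with $f$ strongly convex, is commonly written as follows (standard presentation with a penalty parameter $\rho>0$ assumed here; the paper's proximal variant modifies one of these subproblems):

\[
\begin{aligned}
x^{k+1} &\in \arg\min_x \ f(x) + \langle \lambda^k, Ax\rangle,\\
z^{k+1} &\in \arg\min_z \ g(z) + \langle \lambda^k, Bz\rangle + \tfrac{\rho}{2}\,\|Ax^{k+1}+Bz-b\|^2,\\
\lambda^{k+1} &= \lambda^k + \rho\,(Ax^{k+1}+Bz^{k+1}-b).
\end{aligned}
\]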

Distributionally Robust Inverse Covariance Estimation: The Wasserstein Shrinkage Estimator

We introduce a distributionally robust maximum likelihood estimation model with a Wasserstein ambiguity set to infer the inverse covariance matrix of a p-dimensional Gaussian random vector from n independent samples. The proposed model minimizes the worst case (maximum) of Stein’s loss across all normal reference distributions within a prescribed Wasserstein distance from the normal distribution … Read more
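For context, Stein's loss for a precision-matrix estimate $X \succ 0$ evaluated against a covariance matrix $\Sigma \succ 0$ is usually defined as (standard definition; not the paper's full worst-case formulation)

\[
L(X, \Sigma) \;=\; \operatorname{tr}(X\Sigma) - \log\det(X\Sigma) - p,
\]

and the proposed estimator minimizes the maximum of such a loss over all normal reference distributions within the prescribed Wasserstein ball.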

A family of spectral gradient methods for optimization

We propose a family of spectral gradient methods, whose stepsize is determined by a convex combination of the short Barzilai-Borwein (BB) stepsize and the long BB stepsize. It is shown that each member of the family shares a certain quasi-Newton property in the sense of least squares. The family also includes some other gradient methods as … Read more
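As a minimal sketch (the function name and the weight gamma below are illustrative; the paper's rule for choosing the combination weight is not reproduced), such a stepsize can be computed from the usual BB quantities:

    import numpy as np

    def family_stepsize(s, y, gamma):
        # s = x_k - x_{k-1}, y = grad f(x_k) - grad f(x_{k-1})
        alpha_long = (s @ s) / (s @ y)    # long BB stepsize (BB1)
        alpha_short = (s @ y) / (y @ y)   # short BB stepsize (BB2)
        # convex combination with weight gamma in [0, 1]
        return gamma * alpha_short + (1.0 - gamma) * alpha_long

Setting gamma = 0 or gamma = 1 recovers the long and short BB stepsizes, respectively.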

Robust-to-Dynamics Optimization

A robust-to-dynamics optimization (RDO) problem is an optimization problem specified by two pieces of input: (i) a mathematical program (an objective function $f:\mathbb{R}^n\rightarrow\mathbb{R}$ and a feasible set $\Omega\subseteq\mathbb{R}^n$), and (ii) a dynamical system (a map $g:\mathbb{R}^n\rightarrow\mathbb{R}^n$). Its goal is to minimize $f$ over the set $\mathcal{S}\subseteq\Omega$ of initial conditions that forever remain in $\Omega$ under … Read more
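Concretely, using only the symbols already introduced, RDO asks to solve

\[
\min_{x \in \mathcal{S}} f(x), \qquad
\mathcal{S} \;=\; \{\, x \in \Omega : g^{(k)}(x) \in \Omega \ \text{for all integers } k \ge 0 \,\},
\]

where $g^{(k)}$ denotes the $k$-fold composition of the map $g$.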

Golden Ratio Algorithms for Variational Inequalities

The paper presents a fully explicit algorithm for monotone variational inequalities. The method uses variable stepsizes that are computed using two previous iterates as an approximation of the local Lipschitz constant without running a linesearch. Thus, each iteration of the method requires only one evaluation of a monotone operator $F$ and a proximal mapping $g$. … Read more
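A minimal fixed-stepsize sketch of a golden-ratio update is given below (illustrative only; the paper's method instead adapts the stepsize from two previous iterates to approximate the local Lipschitz constant, which this sketch does not reproduce):

    import numpy as np

    PHI = (1 + np.sqrt(5)) / 2  # golden ratio

    def golden_ratio_iterations(F, prox_g, x0, lam, n_iters=1000):
        # Fixed-stepsize golden-ratio scheme for a monotone VI with operator F
        # and prox-friendly g; each iteration uses one call to F and one prox.
        x = x0.copy()
        x_bar = x0.copy()
        for _ in range(n_iters):
            x_bar = ((PHI - 1) * x + x_bar) / PHI   # golden-ratio averaging
            x = prox_g(x_bar - lam * F(x))          # forward step, then prox
        return x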

Lectures on Parametric Optimization: An Introduction

The report aims to provide an overview of results from Parametric Optimization which could be called classical results on the subject. Parametric Optimization considers optimization problems depending on a parameter and describes how the feasible set, the value function, and the local or global minimizers of the program depend on changes in the parameter. After … Read more
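In the usual notation (assumed here, not quoted from the report), the objects studied are

\[
P(t):\ \min_x \{\, f(x,t) : x \in F(t) \,\}, \qquad
v(t) = \inf\{\, f(x,t) : x \in F(t) \,\}, \qquad
S(t) = \{\, x \in F(t) : f(x,t) = v(t) \,\},
\]

that is, the feasible set, the optimal value function, and the solution set viewed as functions of the parameter $t$.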

An algorithm to compute the Hoffman constant of a system of linear constraints

We propose a combinatorial algorithm to compute the Hoffman constant of a system of linear equations and inequalities. The algorithm is based on a characterization of the Hoffman constant as the largest of a finite canonical collection of easy-to-compute Hoffman constants. Our algorithm and characterization extend to the more general context where some of the … Read more
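For reference, the Hoffman constant of a system $Ax \le b$ is the smallest constant $H(A)$ such that (standard error-bound form; the choice of norms is left unspecified here)

\[
\operatorname{dist}\bigl(x,\ \{y : Ay \le b\}\bigr) \;\le\; H(A)\,\bigl\|(Ax - b)_+\bigr\|
\quad \text{for all } x \in \mathbb{R}^n,
\]

whenever $\{y : Ay \le b\}$ is nonempty; notably, $H(A)$ depends on $A$ but not on the right-hand side $b$.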