Self-Concordance and Matrix Monotonicity with Applications to Quantum Entanglement Problems

Let $V$ be a Euclidean Jordan algebra and let $\Omega$ be the cone of invertible squares in $V$. Suppose that $g:\mathbb{R}_{+} \to \mathbb{R}$ is a matrix monotone function on the positive semiaxis, which naturally induces a function $\tilde{g}: \Omega \to V$. We show that $-\tilde{g}$ is compatible (in the sense of Nesterov-Nemirovski) with the standard self-concordant …
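As a hedged illustration of the two objects named in this abstract, specialized to the cone of positive definite matrices, the sketch below numerically checks that $g(t)=\sqrt{t}$ preserves the Loewner order (operator monotonicity) and evaluates the standard barrier $F(X)=-\log\det X$. It is not taken from the paper; sizes and tolerances are arbitrary.

    import numpy as np
    from scipy.linalg import sqrtm

    # Illustrative check: if 0 < A <= B in the Loewner order, then sqrt(A) <= sqrt(B),
    # i.e. sqrt is operator monotone; F(X) = -log det X is the standard barrier.
    rng = np.random.default_rng(0)
    n = 6
    M = rng.standard_normal((n, n))
    A = M @ M.T + np.eye(n)            # A positive definite
    P = rng.standard_normal((n, n))
    B = A + P @ P.T                    # B = A + PSD, so A <= B (Loewner order)

    gap = np.real(sqrtm(B) - sqrtm(A)) # should be PSD if sqrt is operator monotone
    print(np.linalg.eigvalsh(gap).min() >= -1e-8)   # True up to round-off

    barrier = lambda X: -np.log(np.linalg.det(X))   # standard log-det barrier
    print(barrier(A), barrier(B))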

Burer-Monteiro guarantees for general semidefinite programs

Consider a semidefinite program (SDP) involving an $n\times n$ positive semidefinite matrix $X$. The Burer-Monteiro method consists in solving a nonconvex program in $Y$, where $Y$ is an $n\times p$ matrix such that $X = Y Y^T$. Despite nonconvexity, Boumal et al. showed that the method provably solves generic equality-constrained SDPs when $p > \sqrt{2m}$, …
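The following hedged sketch illustrates the factorization idea on the max-cut SDP (maximize $\langle C, X\rangle$ subject to $\mathrm{diag}(X)=1$, $X\succeq 0$), where the $m=n$ diagonal constraints become unit-norm rows of $Y$. It is a crude fixed-step projected gradient loop, not the Riemannian solver analyzed by Boumal et al.

    import numpy as np

    def burer_monteiro_maxcut(C, p, iters=500, lr=0.1, seed=0):
        # Sketch of X = Y Y^T for the max-cut SDP: rows of Y kept on the unit sphere
        # so that diag(Y Y^T) = 1; crude projected gradient ascent on <C, Y Y^T>.
        rng = np.random.default_rng(seed)
        n = C.shape[0]
        Y = rng.standard_normal((n, p))
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)
        for _ in range(iters):
            G = 2.0 * C @ Y                                   # gradient in Y (C symmetric)
            Y += lr * G
            Y /= np.linalg.norm(Y, axis=1, keepdims=True)     # project back onto constraints
        X = Y @ Y.T
        return X, np.sum(C * X)

    # toy usage with a random symmetric cost matrix and p just above sqrt(2m), m = n
    n = 20
    A = np.random.default_rng(1).standard_normal((n, n))
    C = (A + A.T) / 2
    X, val = burer_monteiro_maxcut(C, p=int(np.ceil(np.sqrt(2 * n))) + 1)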

Knapsack Polytopes – A Survey

The 0/1 knapsack polytope is the convex hull of all 0/1 vectors that satisfy a given single linear inequality with non-negative coefficients. This paper provides a comprehensive overview of knapsack polytopes. We discuss basic polyhedral properties, (lifted) cover and other valid inequalities, cases for which complete linear descriptions are known, geometric properties for small dimensions, …
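As a small illustration of one class of inequalities mentioned in the abstract (not taken from the survey): a cover of the knapsack inequality $a^T x \le b$ is a subset $C$ with $\sum_{j\in C} a_j > b$, and every cover yields the valid inequality $\sum_{j\in C} x_j \le |C|-1$. The brute-force enumeration below lists minimal covers of a toy instance.

    from itertools import combinations

    def minimal_covers(a, b):
        # Enumerate minimal covers of a.x <= b: subsets C with sum(a[C]) > b such that
        # dropping any single element makes the subset feasible again.
        n = len(a)
        covers = []
        for k in range(1, n + 1):
            for C in combinations(range(n), k):
                total = sum(a[j] for j in C)
                if total > b and all(total - a[i] <= b for i in C):
                    covers.append(C)
        return covers

    # toy knapsack 3x1 + 5x2 + 6x3 + 8x4 <= 11
    print(minimal_covers([3, 5, 6, 8], 11))   # -> [(1, 3), (2, 3), (0, 1, 2)]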

RaBVItG: An Algorithm for Solving a Class of Multi-Players Feedback Nash Differential Games

In this work, we introduce a novel numerical algorithm, called RaBVItG (Radial Basis Value Iteration Game), to approximate feedback Nash equilibria for deterministic differential games. More precisely, RaBVItG is an algorithm based on value iteration schemes in a meshfree context. It is used to approximate optimal feedback Nash policies for multiple players, trying to tackle the dimensionality …
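To make the "value iteration in a meshfree context" ingredient concrete, here is a hedged, single-agent sketch: the value function is represented by Gaussian radial basis interpolation on scattered collocation points and updated by a standard Bellman iteration. This is not the RaBVItG algorithm itself (which computes multi-player feedback Nash equilibria); all names and parameters are illustrative.

    import numpy as np

    def rbf_value_iteration(states, dynamics, reward, controls, gamma=0.95,
                            eps=1.0, iters=200):
        # states: (N, d) collocation points; dynamics(x, u) -> next state;
        # reward(x, u) -> float; controls: finite list of admissible controls.
        N = states.shape[0]
        dist = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
        Phi = np.exp(-(eps * dist) ** 2)          # Gaussian RBF interpolation matrix
        V = np.zeros(N)
        for _ in range(iters):
            w = np.linalg.solve(Phi, V)           # fit RBF weights to current values
            def V_hat(x):                         # meshfree evaluation of V at any x
                r = np.linalg.norm(states - x, axis=1)
                return np.exp(-(eps * r) ** 2) @ w
            V_new = np.array([
                max(reward(x, u) + gamma * V_hat(dynamics(x, u)) for u in controls)
                for x in states
            ])
            if np.max(np.abs(V_new - V)) < 1e-8:
                break
            V = V_new
        return V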

Lower Bounds for the Bandwidth Problem

The Bandwidth Problem asks for a simultaneous permutation of the rows and columns of the adjacency matrix of a graph such that all nonzero entries are as close as possible to the main diagonal. This work focuses on investigating novel approaches to obtain lower bounds for the bandwidth problem. In particular, we use vertex partitions …
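For reference, the quantity being bounded can be computed directly: the bandwidth of a labeling is the largest distance of a nonzero entry from the diagonal, and the bandwidth of the graph is the minimum over all labelings. The brute-force sketch below (tiny graphs only, not one of the paper's lower-bounding methods) makes this definition executable.

    from itertools import permutations

    def bandwidth_of_labeling(edges, order):
        # Bandwidth of one vertex labeling: max |pos(u) - pos(v)| over edges (u, v).
        pos = {v: i for i, v in enumerate(order)}
        return max(abs(pos[u] - pos[v]) for u, v in edges)

    def exact_bandwidth(n, edges):
        # Brute-force minimum over all labelings; only feasible for very small n,
        # but useful as a reference value when testing lower bounds.
        return min(bandwidth_of_labeling(edges, p) for p in permutations(range(n)))

    # 5-cycle: every labeling has bandwidth >= 2, and 2 is attained
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    print(exact_bandwidth(5, edges))   # -> 2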

Convex-Concave Backtracking for Inertial Bregman Proximal Gradient Algorithms in Non-Convex Optimization

Backtracking line-search is an old yet powerful strategy for finding a better step size to use in proximal gradient algorithms. The main principle is to locally find a simple convex upper bound of the objective function, which in turn controls the step size that is used. In the case of inertial proximal gradient algorithms, the situation …
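The classical (non-inertial, Euclidean) version of this principle is sketched below: a local Lipschitz estimate $L$ is increased until the quadratic model is a true upper bound at the candidate point, and $1/L$ is used as the step size. The paper's convex-concave backtracking for inertial Bregman schemes refines this idea; the code is only the textbook baseline.

    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def prox_grad_backtracking(f, grad_f, lam, x0, L0=1.0, eta=2.0, iters=100):
        # minimize f(x) + lam*||x||_1 with backtracked step sizes 1/L
        x, L = x0.copy(), L0
        for _ in range(iters):
            g = grad_f(x)
            while True:
                x_new = soft_threshold(x - g / L, lam / L)
                d = x_new - x
                # accept once the local quadratic model upper-bounds f at x_new
                if f(x_new) <= f(x) + g @ d + 0.5 * L * (d @ d) + 1e-12:
                    break
                L *= eta
            x = x_new
        return x

    # usage on a LASSO-type toy problem
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    grad_f = lambda x: A.T @ (A @ x - b)
    x_hat = prox_grad_backtracking(f, grad_f, lam=0.1, x0=np.zeros(10))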

Normal Approximation for Stochastic Gradient Descent via Non-Asymptotic Rates of Martingale CLT

We provide non-asymptotic convergence rates of the Polyak-Ruppert averaged stochastic gradient descent (SGD) to a normal random vector for a class of twice-differentiable test functions. A crucial intermediate step is proving a non-asymptotic martingale central limit theorem (CLT), i.e., establishing the rates of convergence of a multivariate martingale difference sequence to a normal random vector, …
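Polyak-Ruppert averaging itself is simple: run SGD and return the running mean of the iterates; the paper quantifies how fast the suitably rescaled averaged iterate approaches a normal random vector. The hedged sketch below shows the averaging mechanics on a toy least-squares problem (it does not reproduce any result from the paper).

    import numpy as np

    def averaged_sgd(grad_sample, x0, n_steps, step=lambda k: 1.0 / (k + 1) ** 0.6):
        # grad_sample(x) returns an unbiased stochastic gradient at x;
        # x_bar is the running (Polyak-Ruppert) average of the iterates.
        x = x0.copy()
        x_bar = np.zeros_like(x0)
        for k in range(n_steps):
            x = x - step(k) * grad_sample(x)
            x_bar += (x - x_bar) / (k + 1)
        return x, x_bar

    # toy problem: minimize E[0.5*(a.x - b)^2] with b = a.x_star + noise
    rng = np.random.default_rng(0)
    x_star = np.array([1.0, -2.0, 0.5])
    def grad_sample(x):
        a = rng.standard_normal(3)
        b = a @ x_star + 0.1 * rng.standard_normal()
        return (a @ x - b) * a
    x_last, x_avg = averaged_sgd(grad_sample, np.zeros(3), n_steps=20000)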

Distributionally robust optimization with multiple time scales: valuation of a thermal power plant

The valuation of a real option should account for uncertainties in the model, since the value depends on future costs and revenues, which are not perfectly known today. The usual value of the option is defined as the maximal expected (discounted) profit one may achieve under optimal management of the operation. …

Noisy Euclidean Distance Matrix Completion with a Single Missing Node

We present several solution techniques for the noisy single source localization problem, i.e., the Euclidean distance matrix completion problem with a single missing node to locate under noisy data. For the case where the sensor locations are fixed, we show that this problem is implicitly convex, and we provide a purification algorithm along with the SDP …
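For readers unfamiliar with the single-source localization task, the hedged baseline below fits the unknown node by nonlinear least squares from fixed anchors and noisy distances. It only sets up the same kind of data; it is not the paper's implicitly convex reformulation, purification algorithm, or SDP approach.

    import numpy as np
    from scipy.optimize import least_squares

    def locate_single_node(P, d, x0=None):
        # P: (m, k) fixed sensor locations (anchors); d: (m,) noisy distances
        # to the single missing node; returns a least-squares estimate of its position.
        if x0 is None:
            x0 = P.mean(axis=0)                       # start from the anchor centroid
        residual = lambda x: np.linalg.norm(P - x, axis=1) - d
        return least_squares(residual, x0).x

    # toy instance in the plane
    rng = np.random.default_rng(0)
    P = rng.uniform(0, 10, size=(6, 2))
    x_true = np.array([4.0, 7.0])
    d = np.linalg.norm(P - x_true, axis=1) + 0.05 * rng.standard_normal(6)
    x_hat = locate_single_node(P, d)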

A Log-Barrier Newton-CG Method for Bound Constrained Optimization with Complexity Guarantees

We describe an algorithm based on a logarithmic barrier function, Newton’s method, and linear conjugate gradients that obtains an approximate minimizer of a smooth function over the nonnegative orthant. We develop a bound on the complexity of the approach, stated in terms of the required accuracy and the cost of a single gradient evaluation of …
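The hedged sketch below combines the three ingredients named in the abstract (a log barrier for the nonnegative orthant, Newton steps, and linear conjugate gradients) on a convex quadratic. It does not reproduce the paper's algorithm or its complexity guarantees; the schedule and damping are illustrative.

    import numpy as np
    from scipy.sparse.linalg import cg

    def log_barrier_newton_cg(Q, c, x0, mu0=1.0, mu_factor=0.2, outer=8, inner=20):
        # minimize 0.5 x'Qx + c'x subject to x >= 0 (Q symmetric PSD), via the barrier
        # subproblem 0.5 x'Qx + c'x - mu*sum(log x), solved with Newton/CG steps.
        x, mu = x0.copy(), mu0
        for _ in range(outer):
            for _ in range(inner):
                grad = Q @ x + c - mu / x                  # gradient of the barrier problem
                hess = Q + np.diag(mu / x**2)              # its (positive definite) Hessian
                d, _ = cg(hess, -grad)                     # Newton direction via linear CG
                t = 1.0
                while np.any(x + t * d <= 0):              # backtrack to stay strictly feasible
                    t *= 0.5
                x = x + 0.9 * t * d
            mu *= mu_factor                                # tighten the barrier parameter
        return x

    # toy bound-constrained quadratic
    rng = np.random.default_rng(0)
    A = rng.standard_normal((10, 5))
    Q, c = A.T @ A, rng.standard_normal(5)
    x_hat = log_barrier_newton_cg(Q, c, x0=np.ones(5))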