Optimization over Structured Subsets of Positive Semidefinite Matrices via Column Generation

We develop algorithms for inner approximating the cone of positive semidefinite matrices via linear programming and second-order cone programming. Starting with an initial linear-algebraic approximation suggested recently by Ahmadi and Majumdar, we describe an iterative process through which our approximation is improved at every step. This is done using ideas from column generation … Read more
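
As a concrete illustration of the kind of linear-algebraic inner approximation the abstract refers to, the sketch below (ours, not the paper's code) tests diagonal dominance, an LP-representable sufficient condition for positive semidefiniteness in the spirit of Ahmadi and Majumdar's approach. It is deliberately conservative: it certifies some PSD matrices and misses others.

```python
import numpy as np

# A symmetric matrix with nonnegative diagonal that is diagonally dominant is
# PSD (Gershgorin), and diagonal dominance involves only linear constraints,
# so it yields an LP-representable inner approximation of the PSD cone.
def is_diagonally_dominant(A, tol=1e-12):
    A = np.asarray(A, dtype=float)
    d = np.diag(A)
    off = np.abs(A).sum(axis=1) - np.abs(d)
    return bool(np.all(d >= off - tol))

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # diagonally dominant, hence PSD
B = np.array([[1.0, 2.0], [2.0, 5.0]])  # PSD but NOT diagonally dominant
for S in (A, B):
    print(is_diagonally_dominant(S),
          bool(np.all(np.linalg.eigvalsh(S) >= -1e-12)))
```

The second matrix shows why the approximation is strict, and motivates the column-generation idea of iteratively enlarging the approximating cone.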

ADMM for the SDP relaxation of the QAP

The semidefinite programming (SDP) relaxation has proven to be extremely strong for many hard discrete optimization problems. This is particularly true for the quadratic assignment problem (QAP), arguably one of the hardest NP-hard discrete optimization problems. Several difficulties arise in solving the SDP relaxation efficiently, e.g., increased dimension; inefficiency of the … Read more
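
For context, the core subproblem in ADMM-type SDP solvers is a Euclidean projection onto the PSD cone; the sketch below shows that generic step only, not the authors' specific splitting for the QAP relaxation.

```python
import numpy as np

def project_psd(S):
    """Euclidean projection of a symmetric matrix onto the PSD cone:
    keep the eigenvectors, clip negative eigenvalues at zero.
    This is the workhorse subproblem inside ADMM-style SDP solvers."""
    S = 0.5 * (S + S.T)                  # symmetrize against round-off
    w, V = np.linalg.eigh(S)
    return (V * np.maximum(w, 0.0)) @ V.T

S = np.array([[1.0, 3.0], [3.0, 1.0]])   # indefinite (eigenvalues -2, 4)
P = project_psd(S)
print(np.linalg.eigvalsh(P))             # all >= 0 after projection
```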

Optimality conditions for nonlinear semidefinite programming via squared slack variables

In this work, we derive second-order optimality conditions for nonlinear semidefinite programming (NSDP) problems by reformulating them as ordinary nonlinear programming problems using squared slack variables. We first consider the correspondence between Karush-Kuhn-Tucker points and regularity conditions for the general NSDP and its reformulation via slack variables. Then, we obtain a pair of “no-gap” … Read more
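
A minimal sketch of the reformulation in question, in our own notation (the slack matrix $Y$ is our naming, not necessarily the paper's): a conic inequality constraint is traded for an equality by adding a squared matrix slack,

$$\min_{x}\; f(x) \ \ \text{s.t.}\ \ G(x) \preceq 0 \qquad\leadsto\qquad \min_{x,\,Y}\; f(x) \ \ \text{s.t.}\ \ G(x) + Y Y^{\top} = 0 .$$

Since $Y Y^{\top} \succeq 0$ for every $Y$, the equality forces $G(x) \preceq 0$, so the two problems have the same feasible $x$.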

The use of squared slack variables in nonlinear second-order cone programming

In traditional nonlinear programming, the technique of converting a problem with inequality constraints into a problem containing only equality constraints, by the addition of squared slack variables, is well known. This technique is nevertheless generally avoided in the optimization community, since its advantages usually do not compensate for its disadvantages, such as the increase … Read more
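
To make the classical device concrete in the scalar case (this toy problem is ours, not from the paper): the inequality $x \le 1$ becomes the equality $x - 1 + s^2 = 0$.

```python
import numpy as np
from scipy.optimize import minimize

# Original problem:     min (x - 2)^2   s.t.  x <= 1            (inequality)
# Slack reformulation:  min (x - 2)^2   s.t.  x - 1 + s^2 = 0   (equality only)
def objective(z):                 # z = (x, s)
    return (z[0] - 2.0) ** 2

eq = {"type": "eq", "fun": lambda z: z[0] - 1.0 + z[1] ** 2}
res = minimize(objective, x0=np.array([0.0, 1.0]), constraints=[eq])
print(res.x[0])                   # ~1.0: the inequality is active at the optimum
```

Note the extra variable $s$ and the loss of convexity of the constraint, two of the usual objections the abstract alludes to.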

Perturbation of error bounds

Our aim in the current article is to extend the developments in Kruger, Ngai & Théra, SIAM J. Optim. 20(6), 3280–3296 (2010) and, more precisely, to characterize, in the Banach space setting, the stability under data perturbations of the local and global error bound property of inequalities determined by proper lower semicontinuous functions. We propose new … Read more
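
For readers unfamiliar with the property being perturbed, one common formulation is the following: a proper lower semicontinuous function $f$ with $f(\bar{x}) = 0$ and solution set $S = \{x : f(x) \le 0\}$ has a local error bound at $\bar{x}$ if

$$\exists\, c > 0,\ \exists\ \text{a neighborhood } U \text{ of } \bar{x}: \quad d(x, S) \le c\, [f(x)]_{+} \quad \text{for all } x \in U,$$

where $[t]_{+} = \max\{t, 0\}$; the global version requires the inequality for all $x$ in the space. The article studies, roughly, when such bounds survive perturbations of $f$.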

Strengthening the SDP Relaxation of AC Power Flows with Convex Envelopes, Bound Tightening, and Lifted Nonlinear Cuts

This paper considers state-of-the-art convex relaxations for the AC power flow equations and introduces new valid cuts based on convex envelopes and lifted nonlinear constraints. These valid linear inequalities strengthen existing semidefinite and quadratic programming relaxations and dominate existing cuts proposed in the literature. Together with model intersections and bound tightening, the new linear cuts … Read more
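
As one classical example of the convex-envelope machinery involved (the paper's envelopes for the AC terms are more elaborate), the McCormick envelope of a bilinear term $w = xy$ over the box $[x^L, x^U] \times [y^L, y^U]$ consists of four linear cuts:

$$w \ge x^L y + x\, y^L - x^L y^L, \quad w \ge x^U y + x\, y^U - x^U y^U, \quad w \le x^U y + x\, y^L - x^U y^L, \quad w \le x^L y + x\, y^U - x^L y^U .$$

Tighter variable bounds shrink the box and therefore tighten every one of these cuts, which is why bound tightening and envelope cuts reinforce each other.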

Convex Hull Characterizations of Lexicographic Orderings

Given a p-dimensional nonnegative, integral vector α, this paper characterizes the convex hull of the set S of nonnegative, integral vectors x that are lexicographically less than or equal to α. To ensure that S has finitely many elements, the vectors x are restricted to be component-wise upper-bounded by an integral vector u. We … Read more
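
A hypothetical toy instance (the data α and u below are made up) makes the feasible set concrete:

```python
import itertools

def lex_leq(x, a):
    """True iff x is lexicographically less than or equal to a."""
    for xi, ai in zip(x, a):
        if xi != ai:
            return xi < ai
    return True

alpha = (1, 2, 0)                         # hypothetical alpha
u = (2, 2, 2)                             # component-wise upper bound
S = [x for x in itertools.product(*(range(ui + 1) for ui in u))
     if lex_leq(x, alpha)]
print(len(S), S[:4])                      # 16 points of the lex-bounded set
```

The paper characterizes the convex hull of exactly such sets S by linear inequalities, rather than by enumeration.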

Data-Driven Inverse Optimization with Imperfect Information

In data-driven inverse optimization, an observer aims to learn the preferences of an agent who solves a parametric optimization problem depending on an exogenous signal. The observer thus seeks the agent’s objective function that best explains a historical sequence of signals and corresponding optimal actions. We focus here on situations where the observer has imperfect … Read more
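
A minimal sketch of the forward/inverse setup, with entirely synthetic data and a brute-force search over candidate objectives (the paper's estimators are far more sophisticated):

```python
import numpy as np

# Forward model: an agent observes a signal s and picks the action in a small
# signal-dependent feasible set X(s) minimizing c_true . x.  The observer sees
# only (signal, action) pairs and searches for an objective direction with
# minimal total suboptimality on the data.
rng = np.random.default_rng(0)
c_true = np.array([1.0, 2.0])

def X(s):                                   # hypothetical feasible set
    return [np.array([s, 0.0]), np.array([0.0, s]), np.array([s, s])]

signals = rng.uniform(1.0, 3.0, size=20)
actions = [min(X(s), key=lambda x: c_true @ x) for s in signals]

def loss(c):                                # total suboptimality of data under c
    return sum(c @ x - min(c @ y for y in X(s)) for s, x in zip(signals, actions))

candidates = [np.array([np.cos(t), np.sin(t)]) for t in np.linspace(0.1, 1.5, 60)]
c_hat = min(candidates, key=loss)
print(c_hat, loss(c_hat))                   # an objective consistent with the data
```

Note that many objectives may explain the data equally well; this non-uniqueness is one face of the "imperfect information" the abstract highlights.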

SMART: The Stochastic Monotone Aggregated Root-Finding Algorithm

We introduce the Stochastic Monotone Aggregated Root-Finding (SMART) algorithm, a new randomized operator-splitting scheme for finding roots of finite sums of operators. The algorithm is similar to the growing class of incremental aggregated gradient algorithms, which minimize finite sums of functions; the difference is that we replace gradients of functions with black boxes called operators, which … Read more
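
To convey the flavor of incremental aggregated schemes on a finite sum of operators, here is a SAGA-style sketch specialized to affine monotone operators $S_i(x) = A_i x - b_i$; it is an illustration under our own simplifying assumptions, not the exact SMART iteration.

```python
import numpy as np

# Incremental aggregated scheme on a finite sum of operators, specialized to
# affine monotone operators S_i(x) = A_i x - b_i, so the root of sum_i S_i
# solves the linear system (sum_i A_i) x = sum_i b_i.
rng = np.random.default_rng(1)
n, d = 50, 5
R = rng.normal(size=(n, d, d))
A = np.eye(d) + 0.1 * np.einsum("nij,nkj->nik", R, R)  # each A_i positive definite
b = rng.normal(size=(n, d))

x = np.zeros(d)
memory = np.einsum("nij,j->ni", A, x) - b              # stored evaluations S_i(x)
avg = memory.mean(axis=0)
step = 0.1
for _ in range(20000):
    i = rng.integers(n)                                # sample one operator
    fresh = A[i] @ x - b[i]
    x -= step * (fresh - memory[i] + avg)              # aggregated update
    avg += (fresh - memory[i]) / n
    memory[i] = fresh

print(np.linalg.norm(A.sum(axis=0) @ x - b.sum(axis=0)))  # ~0 at a root
```

Each iteration queries a single operator as a black box, mirroring the per-iteration cost the abstract emphasizes.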

Solutions of a constrained Hermitian matrix-valued function optimization problem with applications

Let $f(X) = \left( XC + D\right)M\left(XC + D \right)^{*} - G$ be a given nonlinear Hermitian matrix-valued function with $M = M^*$ and $G = G^*$, and assume that the variable matrix $X$ satisfies the consistent linear matrix equation $XA = B$. This paper shows how to characterize the semi-definiteness of $f(X)$ subject to all … Read more
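
A small numerical sketch (random data and our own construction, purely illustrative) that evaluates $f(X)$ at one particular solution of $XA = B$ and tests its definiteness numerically:

```python
import numpy as np

# Hypothetical instance: build consistent data XA = B, evaluate
# f(X) = (XC + D) M (XC + D)^* - G, and inspect its eigenvalues.
rng = np.random.default_rng(2)
m, n, p = 3, 4, 3
A = rng.normal(size=(n, p))
X_true = rng.normal(size=(m, n))
B = X_true @ A                                     # guarantees XA = B is consistent
C = rng.normal(size=(n, p))
D = rng.normal(size=(m, p))
R = rng.normal(size=(p, p))
M = R @ R.T                                        # Hermitian (real symmetric), PSD
G = np.zeros((m, m))                               # G = G^*

X = B @ np.linalg.pinv(A)                          # one solution of XA = B
F = (X @ C + D) @ M @ (X @ C + D).T - G
print(np.linalg.eigvalsh(0.5 * (F + F.T)))         # >= 0 here since M is PSD, G = 0
```

The paper's contribution is to characterize such semi-definiteness over all solutions of $XA = B$ analytically, rather than checking one solution at a time as above.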