Partial smoothness of the numerical radius at matrices whose fields of values are disks

Solutions to optimization problems involving the numerical radius often belong to a special class: the set of matrices whose field of values is a disk centered at the origin. After illustrating this phenomenon with some examples, we illuminate it by studying matrices around which this set of “disk matrices” is a manifold with respect to which … Read more
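
For context, the numerical radius has the standard characterization $r(A)=\max_{\theta\in[0,2\pi)}\lambda_{\max}\bigl(\tfrac{1}{2}(e^{i\theta}A+e^{-i\theta}A^{*})\bigr)$, and the nilpotent Jordan block is a classical example of a matrix whose field of values is a disk centered at the origin. The following minimal numerical sketch (illustrative only, not taken from the paper) evaluates this characterization on a grid of angles and recovers the disk radius $1/2$ for the $2\times 2$ Jordan block.

```python
# Illustrative sketch (not the paper's method): estimate the numerical radius
#   r(A) = max_theta lambda_max( (e^{i theta} A + e^{-i theta} A^*) / 2 )
# and check that the 2x2 nilpotent Jordan block is a "disk matrix"
# (its field of values is the disk of radius 1/2 centered at the origin).
import numpy as np

def numerical_radius(A, n_angles=2000):
    thetas = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    r = 0.0
    for t in thetas:
        H = (np.exp(1j * t) * A + np.exp(-1j * t) * A.conj().T) / 2
        r = max(r, np.linalg.eigvalsh(H)[-1])   # largest eigenvalue of the Hermitian part
    return r

J = np.array([[0.0, 1.0], [0.0, 0.0]])   # 2x2 nilpotent Jordan block
print(numerical_radius(J))               # approximately 0.5, the radius of W(J)
```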

Weak convergence of an extended splitting method for monotone inclusions

In this article, we consider the problem of finding zeros of monotone inclusions involving two operators in real Hilbert spaces, in which the second operator is composed with a linear operator. We propose an extended splitting method that, at each iteration, essentially solves one resolvent for each of the two operators. The two scaling factors involved in these resolvents can be … Read more
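
The abstract does not state the iteration. As a generic stand-in (an assumption, not the method of this article), the sketch below runs a primal-dual splitting of Chambolle-Pock type for $0\in\partial f(x)+L^{*}\partial g(Lx)$, which likewise uses one resolvent per operator and two scaling factors $\tau,\sigma$ per iteration, on a hypothetical 1-D total-variation denoising problem.

```python
# Generic primal-dual splitting sketch (Chambolle-Pock type), shown only to
# illustrate "one resolvent per operator, two scaling factors"; it is NOT the
# extended splitting method of the article.
# Hypothetical test problem: f(x) = 0.5*||x - b||^2, g(z) = lam*||z||_1,
# L = 1-D forward-difference operator (total-variation denoising).
import numpy as np

rng = np.random.default_rng(0)
n, lam = 200, 2.0
b = np.cumsum(rng.standard_normal(n))            # noisy signal (random walk)
L = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]         # (n-1) x n forward differences

tau = sigma = 0.9 / np.linalg.norm(L, 2)         # scaling factors, tau*sigma*||L||^2 < 1
x = np.zeros(n); y = np.zeros(n - 1); x_bar = x.copy()

for _ in range(500):
    y = np.clip(y + sigma * (L @ x_bar), -lam, lam)       # resolvent of sigma*dg* (projection)
    x_new = (x - tau * (L.T @ y) + tau * b) / (1 + tau)   # resolvent of tau*df (quadratic prox)
    x_bar = 2 * x_new - x
    x = x_new

print(np.round(x[:5], 3))   # piecewise-constant estimate of b
```

The step sizes above satisfy $\tau\sigma\|L\|^2<1$, the standard condition under which this particular scheme converges weakly.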

A Comparison of Nonsmooth, Nonconvex, Constrained Optimization Solvers for the Design of Time-Delay Compensators

We present a detailed set of performance comparisons of two state-of-the-art solvers for the application of designing time-delay compensators, an important problem in the field of robust control. Formulating the design of such compensators as constrained optimization problems often involves objective and constraint functions that are both nonconvex and nonsmooth, both of which present significant challenges … Read more

Successive Quadratic Upper-Bounding for Discrete Mean-Risk Minimization and Network Interdiction

The advances in conic optimization have led to its increased utilization for modeling data uncertainty. In particular, conic mean-risk optimization gained prominence in probabilistic and robust optimization. Whereas the corresponding conic models are solved efficiently over convex sets, their discrete counterparts are intractable. In this paper, we give a highly effective successive quadratic upper-bounding procedure … Read more
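
The abstract does not describe the bounding step; one natural construction (stated here as an assumption, using only the concavity of the square root) majorizes a conic mean-risk objective by a convex quadratic:
\[
\sqrt{s}\;\le\;\frac{s+t}{2\sqrt{t}}\qquad\text{for all } s\ge 0,\ t>0,
\]
so for an objective of the form $c^\top x+\Omega\sqrt{x^\top Q x}$ and an incumbent $\bar{x}$, taking $t=\bar{x}^\top Q\bar{x}$ yields the quadratic upper bound
\[
c^\top x+\frac{\Omega}{2\sqrt{t}}\bigl(x^\top Q x+t\bigr),
\]
which is tight at $\bar{x}$ and can be re-minimized over the discrete feasible set to produce the next iterate.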

Submodularity in conic quadratic mixed 0-1 optimization

We describe strong convex valid inequalities for conic quadratic mixed 0-1 optimization. These inequalities can be utilized for solving numerous practical nonlinear discrete optimization problems from value-at-risk minimization to queueing system design, from robust interdiction to assortment optimization through appropriate conic quadratic mixed 0-1 relaxations. The inequalities exploit the submodularity of the binary restrictions and … Read more
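
As an illustration of the submodularity being exploited (a generic example chosen here, not necessarily the paper's construction): for binary variables and nonnegative weights $d$, the set function $g(S)=\sqrt{\sum_{i\in S}d_i}$ arising from the conic quadratic term is submodular. The small check below verifies $g(S)+g(T)\ge g(S\cup T)+g(S\cap T)$ exhaustively on a toy instance.

```python
# Small numerical check (illustrative only) that the binary restriction
# g(S) = sqrt(sum_{i in S} d_i), d >= 0, is submodular:
# g(S) + g(T) >= g(S | T) + g(S & T) for all S, T.
from itertools import chain, combinations
from math import sqrt

d = [0.3, 1.1, 0.0, 2.5, 0.7]      # hypothetical nonnegative weights
ground = range(len(d))
subsets = list(chain.from_iterable(combinations(ground, k) for k in range(len(d) + 1)))

def g(S):
    return sqrt(sum(d[i] for i in S))

ok = all(
    g(S) + g(T) >= g(set(S) | set(T)) + g(set(S) & set(T)) - 1e-12
    for S in subsets for T in subsets
)
print(ok)   # True: a concave function of a nonnegative modular set function is submodular
```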

Consistency Bounds and Support Recovery of D-stationary Solutions of Sparse Sample Average Approximations

This paper studies properties of the d(irectional)-stationary solutions of sparse sample average approximation (SAA) problems involving difference-of-convex (dc) sparsity functions under a deterministic setting. Such properties are investigated with respect to a vector that satisfies a verifiable assumption relating the empirical SAA problem to the expectation minimization problem defined by an underlying data distribution. … Read more
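
As an example of the class of dc sparsity functions involved (an illustrative choice made here, not necessarily one studied in the paper), the capped-$\ell_1$ penalty admits the difference-of-convex decomposition
\[
P_a(t)=\min(|t|,a)=|t|-\max(|t|-a,0),\qquad a>0,
\]
so a sparse SAA problem of the form $\min_x \frac{1}{N}\sum_{i=1}^{N}\ell(x;\xi^i)+\lambda\sum_{j}P_a(x_j)$ has the difference-of-convex structure to which d-stationarity applies.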

On semi-infinite systems of convex polynomial inequalities and polynomial optimization problems

We consider the semi-infinite system of polynomial inequalities of the form \[ \mathbf{K}:=\{x\in\mathbb{R}^m\mid p(x,y)\ge 0,\ \ \forall y\in S\subseteq\mathbb{R}^n\}, \] where $p(X,Y)$ is a real polynomial in the variables $X$ and the parameters $Y$, the index set $S$ is a basic semialgebraic set in $\mathbb{R}^n$, and $-p(X,y)$ is convex in $X$ for every $y\in S$. We … Read more
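
For concreteness (an example supplied here, not taken from the paper), with $n=m$ one may take
\[
p(x,y)=1-(y^\top x)^2,\qquad S=\{y\in\mathbb{R}^m:\|y\|_2\le 1\},
\]
for which $-p(\cdot,y)$ is convex for every $y$ (its Hessian is $2yy^\top\succeq 0$), $S$ is basic semialgebraic, and the resulting set is $\mathbf{K}=\{x\in\mathbb{R}^m:\|x\|_2\le 1\}$, the Euclidean unit ball described by infinitely many convex polynomial inequalities.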

Escaping local minima with derivative-free methods: a numerical investigation

We apply a state-of-the-art, local derivative-free solver, Py-BOBYQA, to global optimization problems, and propose an algorithmic improvement that is beneficial in this context. Our numerical findings are illustrated on a commonly-used test set of global optimization problems and associated noisy variants, and on hyperparameter tuning for a machine learning test set. As Py-BOBYQA is a … Read more
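
A minimal usage sketch, assuming the Py-BOBYQA Python package and its multiple-restarts option `seek_global_minimum` (the exact option names are an assumption; consult the package documentation), applied to a standard multimodal test function with box bounds:

```python
# Usage sketch (assumed Py-BOBYQA API): local derivative-free solver with the
# multiple-restarts heuristic enabled, on the multimodal Rastrigin function.
import numpy as np
import pybobyqa

def rastrigin(x):
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

x0 = np.array([3.0, -2.5])
lower, upper = -5.12 * np.ones(2), 5.12 * np.ones(2)

soln = pybobyqa.solve(rastrigin, x0, bounds=(lower, upper),
                      seek_global_minimum=True, maxfun=2000)
print(soln.x, soln.f)   # ideally near the global minimizer x = 0 with f = 0
```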

The convex hull of a quadratic constraint over a polytope

A quadratically constrained quadratic program (QCQP) is an optimization problem in which the objective function is a quadratic function and the feasible region is defined by quadratic constraints. Solving non-convex QCQP to global optimality is a well-known NP-hard problem and a traditional approach is to use convex relaxations and branch-and-bound algorithms. This paper makes a … Read more
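
Written out, the problem class in question is
\[
\min_{x\in\mathbb{R}^n}\ x^\top Q_0 x+c_0^\top x\quad\text{s.t.}\quad x^\top Q_i x+c_i^\top x\le b_i,\ \ i=1,\dots,m,
\]
which is nonconvex whenever some $Q_i$ (or $Q_0$) fails to be positive semidefinite; in that case convex relaxations combined with branch-and-bound are the traditional route to global optimality.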

Reinforcement Learning via Parametric Cost Function Approximation for Multistage Stochastic Programming

The most common approaches for solving stochastic resource allocation problems in the research literature are to use either value functions (“dynamic programming”) or scenario trees (“stochastic programming”) to approximate the impact of a decision now on the future. By contrast, common industry practice is to use a deterministic approximation of the future, which is easier … Read more
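
As a schematic illustration of the parametric cost function approximation idea (the model, parameter, and tuning loop below are assumptions for illustration, not the paper's algorithm): embed a tunable parameter in a deterministic inner model and tune it by simulating the induced policy on sampled scenarios.

```python
# Schematic sketch (illustrative assumptions, not the paper's algorithm) of a
# parametric cost function approximation: a deterministic inner model whose
# forecast is inflated by a tunable parameter theta, tuned by simulation.
import numpy as np

rng = np.random.default_rng(1)
forecast = 100.0
demand = rng.poisson(forecast, size=5000)        # sampled demand scenarios

def simulate_cost(theta, c_over=1.0, c_under=4.0):
    order = theta * forecast                     # deterministic model with buffer factor theta
    return np.mean(c_over * np.maximum(order - demand, 0) +
                   c_under * np.maximum(demand - order, 0))

thetas = np.linspace(0.8, 1.3, 26)               # crude direct search over the parameter
best = min(thetas, key=simulate_cost)
print(best, simulate_cost(best))
```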