Robust System Identification: Finite-sample Guarantees and Connection to Regularization

We address the problem of identifying a stable linear time-invariant system from a single sample trajectory. The least squares estimate (LSE) is a commonly used algorithm for this purpose. However, the LSE may exhibit large identification errors when the number of samples is small. To mitigate this issue, we introduce the robust LSE, which integrates robust …
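
For reference, here is a minimal sketch (not the authors' code) of the baseline LSE for a system x_{t+1} = A x_t + w_t identified from one trajectory; the dimensions, noise level, and trajectory length are illustrative assumptions, and the robust variant proposed in the paper is not shown:

    # Baseline ordinary least squares for x_{t+1} = A x_t + w_t.
    import numpy as np

    rng = np.random.default_rng(0)
    n, T = 3, 50                      # state dimension, trajectory length (assumed)
    A_true = 0.9 * np.eye(n)          # a stable system: spectral radius < 1

    X = np.zeros((n, T + 1))          # a single sample trajectory
    for t in range(T):
        X[:, t + 1] = A_true @ X[:, t] + 0.1 * rng.standard_normal(n)

    # A_hat = argmin_A sum_t ||x_{t+1} - A x_t||^2, via linear least squares
    A_hat = np.linalg.lstsq(X[:, :T].T, X[:, 1:].T, rcond=None)[0].T
    print(np.linalg.norm(A_hat - A_true, 2))   # identification error

For small T this error can be substantial, which is precisely the regime the robust estimator targets.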

Nonconvex optimization problems involving the Euclidean norm: Challenges, progress, and opportunities

The field of global optimization has advanced significantly over the past three decades. Yet solving even small instances of many nonconvex optimization problems involving the Euclidean norm to global optimality remains beyond the reach of modern global optimization methods. These problems include numerous well-known and high-impact open research questions from a diverse collection …
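
One canonical member of this class (offered as an assumed illustration, not necessarily an example treated in the paper) is packing k equal circles in the unit square, a problem built entirely from Euclidean norms:

\[
\max_{r,\; x_1,\dots,x_k \in \mathbb{R}^2} \; r
\quad \text{s.t.} \quad \|x_i - x_j\|_2 \ge 2r \;\; (1 \le i < j \le k),
\qquad r\,\mathbf{1} \le x_i \le (1 - r)\,\mathbf{1}.
\]

The reverse-convex separation constraints \(\|x_i - x_j\|_2 \ge 2r\) are what defeat convex relaxations, and provably optimal packings are known only for modest values of k.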

Tighter yet more tractable relaxations and nontrivial instance generation for sparse standard quadratic optimization

The Standard Quadratic optimization Problem (StQP), arguably the simplest among all classes of NP-hard optimization problems, consists of extremizing a quadratic form (the simplest nonlinear polynomial) over the standard simplex (the simplest polytope/compact feasible set). As a problem class, StQPs may be nonconvex and may have exponentially many inefficient local solutions. StQPs arise in a …
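
In symbols, with Q a symmetric n x n matrix and e the all-ones vector, an StQP reads

\[
\min_{x \in \Delta_n} \; x^{\top} Q x,
\qquad \Delta_n = \{\, x \in \mathbb{R}^n : e^{\top} x = 1,\; x \ge 0 \,\},
\]

and it is convex only when Q is positive semidefinite on the directions d with \(e^{\top} d = 0\); otherwise inefficient local solutions can proliferate.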

Efficient Project Scheduling with Autonomous Learning Opportunities

We consider novel project scheduling problems in which the experience gained from completing selected activities can be used to accelerate subsequent activities. Given a set of potential learning opportunities, our model aims to identify the opportunities that result in a maximum reduction of the project makespan when scheduled in sequence. Accounting for the impact of …

Exploiting cone approximations in an augmented Lagrangian method for conic optimization

We propose an algorithm for general nonlinear conic programming that does not require knowledge of the full cone, but only of a simpler, more tractable approximation of it. We prove that the algorithm satisfies a strong global convergence property, in the sense that it generates sequences satisfying a strong sequential optimality condition. In particular, a KKT point …
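
For orientation, a common augmented Lagrangian for the conic constraint \(G(x) \in K\), with K a closed convex cone and multiplier \(\mu\) in its polar cone (a standard textbook form, not necessarily the paper's exact construction), is

\[
L_{\rho}(x, \mu) \;=\; f(x) + \frac{\rho}{2}\, \operatorname{dist}^2\!\Big( G(x) + \tfrac{\mu}{\rho},\; K \Big),
\]

where evaluating the distance only requires a projection onto K; that projection is the natural place where a simpler approximating cone can be substituted.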

A Geometric Unification of Distributionally Robust Covariance Estimators: Shrinking the Spectrum by Inflating the Ambiguity Set

The state-of-the-art methods for estimating high-dimensional covariance matrices all shrink the eigenvalues of the sample covariance matrix towards a data-insensitive shrinkage target. The underlying shrinkage transformation is either chosen heuristically, without compelling theoretical justification, or optimally only under restrictive distributional assumptions. In this paper, we propose a principled approach to construct covariance …
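
As a point of comparison, here is a minimal sketch of classical linear shrinkage toward a scaled-identity target, in the spirit of Ledoit-Wolf; the intensity alpha is hand-picked for illustration, not the principled choice developed in the paper:

    # Linear shrinkage of the sample covariance toward (tr(S)/p) * I.
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 40, 50                        # fewer samples than dimensions (assumed)
    X = rng.standard_normal((n, p))      # true covariance is the identity

    S = np.cov(X, rowvar=False)          # sample eigenvalues spread far from 1
    target = (np.trace(S) / p) * np.eye(p)
    alpha = 0.5                          # illustrative shrinkage intensity in [0, 1]
    Sigma = (1 - alpha) * S + alpha * target

    eig_S, eig_Sig = np.linalg.eigvalsh(S), np.linalg.eigvalsh(Sigma)
    print(eig_S[0], eig_S[-1])           # extreme sample eigenvalues
    print(eig_Sig[0], eig_Sig[-1])       # pulled toward their common mean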

On the integrality gap of the Complete Metric Steiner Tree Problem via a novel formulation

In this work, we compute lower bounds on the integrality gap of the Metric Steiner Tree Problem (MSTP) on a graph for small numbers of nodes and terminals. After discussing some limitations of the most widely used formulation for the Steiner Tree Problem, namely the Bidirected Cut Formulation, we introduce a …
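
Recall that the integrality gap of a formulation is the worst-case ratio between the integral and fractional optima over all instances I,

\[
\mathrm{gap} \;=\; \sup_{I} \frac{\mathrm{OPT}_{\mathrm{IP}}(I)}{\mathrm{OPT}_{\mathrm{LP}}(I)},
\]

so the largest ratio found over all instances with a fixed number of nodes and terminals is a valid lower bound on the gap.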

On Necessary Optimality Conditions for Sets of Points in Multiobjective Optimization

Taking inspiration from what is commonly done in single-objective optimization, most local algorithms proposed for multiobjective optimization extend the classical iterative scalar methods and produce sequences of points that converge to single efficient points. Recently, a growing number of local algorithms that build sequences of sets have been devised, following the real nature of …
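
For reference, the efficient points mentioned above are the standard Pareto-optimal ones: for objectives \(F = (f_1, \dots, f_m)\) over a feasible set \(\Omega\), a point \(x \in \Omega\) is efficient if

\[
\nexists\, y \in \Omega \;:\; f_i(y) \le f_i(x) \;\,\forall i
\quad \text{and} \quad F(y) \ne F(x).
\]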

Statistical and Computational Guarantees of Kernel Max-Sliced Wasserstein Distances

Optimal transport has been very successful for various machine learning tasks; however, it is known to suffer from the curse of dimensionality. Hence, dimensionality reduction is desirable when optimal transport is applied to high-dimensional data with low-dimensional structure. The kernel max-sliced (KMS) Wasserstein distance is developed for this purpose by finding an optimal nonlinear mapping that reduces data …
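
For orientation, the linear max-sliced Wasserstein distance that the kernel version generalizes can be written (a standard definition; the RKHS construction of the KMS variant is in the paper) as

\[
\overline{W}_p(\mu, \nu) \;=\; \max_{\theta \in \mathbb{S}^{d-1}} \; W_p\big( (\theta^{\top})_{\#}\mu,\; (\theta^{\top})_{\#}\nu \big),
\]

where \((\theta^{\top})_{\#}\mu\) is the pushforward of \(\mu\) under the one-dimensional projection \(x \mapsto \theta^{\top} x\); the kernel variant searches over unit-norm functions in an RKHS instead of linear projections.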