H2-optimal model reduction of MIMO systems

We consider the problem of approximating a $p\times m$ rational transfer function $H(s)$ of high degree by another $p\times m$ rational transfer function $\hat{H}(s)$ of much smaller degree. We derive the gradients of the $H_2$-norm of the approximation error and show how stationary points can be described via tangential interpolation. Citation: Technical report UCL-INMA-2007.034, Department of … Read more
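For reference, the error measure in question is the standard $H_2$ norm of $E(s) = H(s) - \hat{H}(s)$ (assuming, as is usual in this setting, that $H$ and $\hat{H}$ are stable and strictly proper so the integral is finite):

$$\|H-\hat{H}\|_{H_2}^2 \;=\; \frac{1}{2\pi}\int_{-\infty}^{\infty} \operatorname{tr}\!\left[\bigl(H(i\omega)-\hat{H}(i\omega)\bigr)\bigl(H(i\omega)-\hat{H}(i\omega)\bigr)^{*}\right]\, d\omega .$$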

Local convergence for alternating and averaged nonconvex projections

The idea of a finite collection of closed sets having “strongly regular intersection” at a given point is crucial in variational analysis. We show that this central theoretical tool also has striking algorithmic consequences. Specifically, we consider the case of two sets, one of which we assume to be suitably “regular” (special cases being convex … Read more
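To make the algorithmic side concrete, here is a minimal sketch of the alternating projections iteration on two hypothetical sets in the plane (a line, which is convex, and a circle, which is not); the sets, parameters, and starting point are illustrative and not taken from the paper:

```python
import numpy as np

def project_line(x, a, b):
    """Project x onto the line {a + t*b : t in R} (b assumed nonzero)."""
    t = np.dot(x - a, b) / np.dot(b, b)
    return a + t * b

def project_circle(x, center, radius):
    """Project x onto the circle of given center and radius (nonconvex set)."""
    d = x - center
    n = np.linalg.norm(d)
    return center + radius * d / n if n > 0 else center + np.array([radius, 0.0])

# Alternating projections: x_{k+1} = P_A(P_B(x_k)); with strong regularity of the
# intersection one expects local linear convergence from a good starting point.
x = np.array([3.0, 4.0])
a, b = np.array([0.0, 1.0]), np.array([1.0, 0.5])   # line parameters (hypothetical)
center, radius = np.array([0.0, 0.0]), 1.0          # circle parameters (hypothetical)
for _ in range(50):
    x = project_line(project_circle(x, center, radius), a, b)
print(x)  # near a point of the intersection of the two sets
```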

Semidefinite Programming for Gradient and Hessian Computation in Maximum Entropy Estimation

We consider the classical problem of estimating a density on $[0,1]$ via some maximum entropy criterion. For solving this convex optimization problem with algorithms using first-order or second-order methods, at each iteration one has to compute (or at least approximate) moments of some measure with a density on $[0,1]$, to obtain gradient and Hessian data. … Read more
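To show where the moments enter, here is a minimal sketch for a maximum-entropy density of exponential-polynomial form on $[0,1]$, using plain numerical quadrature rather than the semidefinite-programming machinery of the paper; the parameterization, target moments, and Newton damping are illustrative assumptions:

```python
import numpy as np

def moments(lam, degree, n_grid=2000):
    """Moments m_k = int_0^1 x^k exp(lam_0 + lam_1 x + ...) dx via the trapezoid rule."""
    x = np.linspace(0.0, 1.0, n_grid)
    dens = np.exp(np.polyval(lam[::-1], x))
    return np.array([np.trapz(x**k * dens, x) for k in range(degree + 1)])

target = np.array([1.0, 0.55, 0.4])   # hypothetical target moments m_0, m_1, m_2
d = len(target) - 1

def grad_hess(lam):
    """Gradient and Hessian of the dual objective; both are built from moments."""
    m = moments(lam, 2 * d)            # moments up to order 2d are needed for the Hessian
    grad = m[: d + 1] - target
    hess = np.array([[m[j + k] for k in range(d + 1)] for j in range(d + 1)])
    return grad, hess                  # Hessian is a Hankel matrix of moments

# One damped Newton step from lam = 0 (a sketch, not the paper's SDP-based approach).
lam = np.zeros(d + 1)
g, H = grad_hess(lam)
lam = lam - 0.5 * np.linalg.solve(H, g)
print(lam)
```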

Nonparametric Estimation via Convex Programming

In the paper, we focus primarily on the problem of recovering a linear form g'*x of unknown “signal” x known to belong to a given convex compact set X in R^n from N independent realizations of a random variable taking values in a finite set, the distribution p of the variable being affinely parameterized by … Read more

Robust Nonconvex Optimization for Simulation-based Problems

In engineering design, an optimized solution often turns out to be suboptimal when implementation errors are encountered. While the theory of robust convex optimization has taken significant strides over the past decade, all approaches fail if the underlying cost function is not explicitly given; it is even worse if the cost function is nonconvex. In … Read more
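A rough sketch of the kind of worst-case evaluation a simulation-based robust approach relies on, estimating the worst cost over an implementation-error neighborhood by sampling; the cost function, error radius, and sampling scheme below are all illustrative assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x):
    """Black-box, possibly nonconvex cost; here a toy stand-in for a simulation."""
    return np.sin(3 * x[0]) * np.cos(2 * x[1]) + 0.1 * np.dot(x, x)

def worst_case(x, radius=0.2, n_samples=200):
    """Estimate the worst cost over implementation errors ||dx|| <= radius by sampling."""
    worst = cost(x)
    for _ in range(n_samples):
        d = rng.normal(size=x.shape)
        d *= radius * rng.random() ** (1 / len(x)) / np.linalg.norm(d)  # uniform in the ball
        worst = max(worst, cost(x + d))
    return worst

# A robust local search would then try to decrease worst_case(x) rather than cost(x).
x = np.array([0.5, -0.3])
print(cost(x), worst_case(x))
```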

A General Heuristic Method for Joint Chance-Constrained Stochastic Programs with Discretely Distributed Parameters

We present a general metaheuristic for joint chance-constrained stochastic programs with discretely distributed parameters. We give a reformulation of the problem that allows us to define a finite solution space. We then formulate a novel neighborhood for the problem and give methods for efficiently searching this neighborhood for solutions that are likely to be improving. … Read more
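For context, once a candidate solution is fixed, checking a joint chance constraint over discretely distributed parameters amounts to summing the probabilities of the scenarios in which all constraint rows hold jointly; the scenario data below are illustrative placeholders:

```python
import numpy as np

def joint_chance_satisfied(x, A_scenarios, b_scenarios, probs, alpha=0.9):
    """Check P( A(omega) x <= b(omega) ) >= alpha over a finite set of scenarios.

    A_scenarios: list of (m, n) arrays, b_scenarios: list of (m,) arrays,
    probs: scenario probabilities summing to 1 (all illustrative placeholders).
    """
    satisfied_prob = sum(
        p for A, b, p in zip(A_scenarios, b_scenarios, probs)
        if np.all(A @ x <= b + 1e-9)          # all rows must hold jointly in a scenario
    )
    return satisfied_prob >= alpha

# Toy data: 3 scenarios, 2 variables, 1 constraint row each.
A_s = [np.array([[1.0, 1.0]]), np.array([[1.0, 2.0]]), np.array([[2.0, 1.0]])]
b_s = [np.array([3.0]), np.array([2.5]), np.array([4.0])]
p_s = [0.5, 0.3, 0.2]
print(joint_chance_satisfied(np.array([1.0, 1.0]), A_s, b_s, p_s, alpha=0.8))
```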

Exploiting separability in large-scale linear support vector machine training

Linear support vector machine training can be represented as a large quadratic program. We present an efficient and numerically stable algorithm for this problem using interior point methods, which requires only O(n) operations per iteration. By exploiting the separability of the Hessian, we provide a unified approach, from an optimization perspective, to 1-norm classification, 2-norm … Read more
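The computational point that a diagonal-plus-low-rank Hessian can be handled in roughly O(n) work per iteration can be illustrated with a Sherman-Morrison-Woodbury solve; the matrices below are generic placeholders, not the paper's exact interior point KKT system:

```python
import numpy as np

def solve_diag_plus_lowrank(d, V, r):
    """Solve (diag(d) + V V^T) z = r via Sherman-Morrison-Woodbury.

    d: (n,) positive diagonal, V: (n, k) with k << n, r: (n,) right-hand side.
    Cost is O(n k^2) instead of O(n^3) for a dense factorization.
    """
    Dinv_r = r / d
    Dinv_V = V / d[:, None]
    S = np.eye(V.shape[1]) + V.T @ Dinv_V      # small k x k capacitance matrix
    return Dinv_r - Dinv_V @ np.linalg.solve(S, V.T @ Dinv_r)

# Quick consistency check against a dense solve on toy data.
rng = np.random.default_rng(1)
n, k = 1000, 5
d = 1.0 + rng.random(n)
V = rng.normal(size=(n, k))
r = rng.normal(size=n)
z = solve_diag_plus_lowrank(d, V, r)
print(np.allclose((np.diag(d) + V @ V.T) @ z, r))
```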

Iterative Minimization Schemes for Solving the Single Source Localization Problem

We consider the problem of locating a single radiating source from several noisy measurements using a maximum likelihood (ML) criterion. The resulting optimization problem is nonconvex and nonsmooth, and thus finding its global solution is in principle a hard task. Exploiting the special structure of the objective function, we introduce and analyze two iterative schemes … Read more
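A minimal sketch of the nonconvex, nonsmooth ML-type objective in question (squared range residuals from known sensor positions) together with a simple fixed-point update derived from its stationarity condition; the data are made up and the iteration is an illustration, not the paper's specific schemes:

```python
import numpy as np

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0], [5.0, 4.0]])  # sensor positions
d = np.array([2.9, 2.2, 1.6, 3.4])                                    # noisy range measurements

def ml_objective(x):
    """Sum of squared range residuals: nonconvex, and nonsmooth at the anchors."""
    return np.sum((np.linalg.norm(anchors - x, axis=1) - d) ** 2)

def fixed_point_step(x):
    """One Weiszfeld-like update obtained from the stationarity condition
    sum_i (1 - d_i/||x - a_i||)(x - a_i) = 0, rearranged for x."""
    diff = x - anchors
    norms = np.maximum(np.linalg.norm(diff, axis=1), 1e-12)
    w = d / norms
    return (anchors + w[:, None] * diff).mean(axis=0)

x = np.array([1.0, 1.0])
for _ in range(200):
    x = fixed_point_step(x)
print(x, ml_objective(x))
```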

Nonlinear Matroid Optimization and Experimental Design

We study the problem of optimizing nonlinear objective functions over matroids presented by oracles or explicitly. Such functions can be interpreted as the balancing of multi-criteria optimization. We provide a combinatorial polynomial time algorithm for arbitrary oracle-presented matroids that makes repeated use of matroid intersection, and an algebraic algorithm for vectorial matroids. Our work is … Read more
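To ground the objective class, here is a brute-force illustration on a tiny graphic matroid: spanning trees of a 4-node graph with vector edge weights and a min-type balancing function. All data are made up, and plain enumeration is exactly what the paper's matroid-intersection and algebraic algorithms are designed to avoid:

```python
import numpy as np
from itertools import combinations

# Edges of a 4-node graph, each carrying a 2-dimensional weight vector (placeholders).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)]
weights = {(0, 1): np.array([3, 0]), (0, 2): np.array([1, 2]), (0, 3): np.array([0, 3]),
           (1, 2): np.array([2, 1]), (2, 3): np.array([1, 1])}

def is_spanning_tree(tree, n=4):
    """Check that the edge set connects all n nodes without a cycle (union-find)."""
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for u, v in tree:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return len(tree) == n - 1

def balance(w):
    """A nonlinear 'balancing' objective on the accumulated weight vector."""
    return min(w)  # e.g. maximize the worst criterion

# Brute force over the bases of the matroid (spanning trees have n - 1 = 3 edges).
best = max((b for b in combinations(edges, 3) if is_spanning_tree(b)),
           key=lambda b: balance(sum((weights[e] for e in b), np.array([0, 0]))))
print(best)
```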

A Fast Algorithm For Image Deblurring with Total Variation Regularization

We propose and test a simple algorithmic framework for recovering images from blurry and noisy observations based on total variation (TV) regularization when a blurring point-spread function is given. Using a splitting technique, we construct an iterative procedure of alternately solving a pair of easy subproblems associated with an increasing sequence of penalty parameter values. … Read more
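A compact illustration of the alternating splitting idea: a half-quadratic-style scheme that alternates a closed-form shrinkage step with an FFT-diagonalized image subproblem over an increasing sequence of penalty values. The model, boundary assumptions, and penalty schedule below are placeholders, and this is a sketch of the general technique rather than the paper's exact algorithm:

```python
import numpy as np

def pad_to(k, shape):
    """Zero-pad a small kernel to the full image size (top-left placement)."""
    out = np.zeros(shape)
    out[: k.shape[0], : k.shape[1]] = k
    return out

def psf_to_otf(kernel, shape):
    """Pad, then circularly shift so the kernel center sits at index (0, 0)."""
    padded = np.roll(pad_to(kernel, shape),
                     (-(kernel.shape[0] // 2), -(kernel.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(padded)

def deblur_tv(f, kernel, betas=(1.0, 4.0, 16.0, 64.0), mu=500.0, inner=10):
    """Alternating minimization for TV-regularized deblurring (anisotropic TV sketch).

    Model: min_u TV(u) + (mu/2) ||K u - f||^2, with splitting variables w ~ grad u
    penalized by increasing beta; periodic boundaries and a PSF with nonzero mean assumed.
    """
    K = psf_to_otf(kernel, f.shape)
    Dx = psf_to_otf(np.array([[1.0, -1.0]]), f.shape)    # horizontal difference operator
    Dy = psf_to_otf(np.array([[1.0], [-1.0]]), f.shape)  # vertical difference operator
    F = np.fft.fft2(f)
    u = f.copy()
    for beta in betas:
        denom = np.abs(Dx) ** 2 + np.abs(Dy) ** 2 + (mu / beta) * np.abs(K) ** 2
        for _ in range(inner):
            # w-subproblem: componentwise soft shrinkage of the image gradients.
            ux = np.real(np.fft.ifft2(Dx * np.fft.fft2(u)))
            uy = np.real(np.fft.ifft2(Dy * np.fft.fft2(u)))
            wx = np.sign(ux) * np.maximum(np.abs(ux) - 1.0 / beta, 0.0)
            wy = np.sign(uy) * np.maximum(np.abs(uy) - 1.0 / beta, 0.0)
            # u-subproblem: a quadratic that diagonalizes in the Fourier domain.
            rhs = (np.conj(Dx) * np.fft.fft2(wx) + np.conj(Dy) * np.fft.fft2(wy)
                   + (mu / beta) * np.conj(K) * F)
            u = np.real(np.fft.ifft2(rhs / denom))
    return u
```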