Mean squared error minimization for inverse moment problems

We consider the problem of approximating the unknown density $u\in L^2(\Omega,\lambda)$ of a measure $\mu$ on $\Omega\subset\mathbb{R}^n$, absolutely continuous with respect to some given reference measure $\lambda$, knowing only finitely many moments of $\mu$. Given $d\in\mathbb{N}$ and moments of order $d$, we provide a polynomial $p_d$ which minimizes the mean squared error … Read more
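As a hedged illustration of the problem class (not the paper's construction), the L2-optimal polynomial approximant from moments solves the normal equations whose Gram matrix collects the moments of the reference measure; with $\lambda$ the Lebesgue measure on $[0,1]$ and the monomial basis, that Gram matrix is the Hilbert matrix. The function name below is illustrative.

```python
import numpy as np

def l2_polynomial_from_moments(mu_moments, d):
    """Coefficients of the degree-d polynomial minimizing the mean
    squared error against the unknown density, given the moments
    m_j = ∫ x^j dμ for j = 0..d, with λ = Lebesgue measure on [0, 1].

    The normal equations are G c = m, where G[j, k] = ∫ x^{j+k} dλ
    = 1/(j+k+1), i.e. the (d+1)x(d+1) Hilbert matrix.
    """
    G = np.array([[1.0 / (j + k + 1) for k in range(d + 1)]
                  for j in range(d + 1)])
    m = np.asarray(mu_moments[: d + 1], dtype=float)
    return np.linalg.solve(G, m)

# Example: μ has density u(x) = 2x on [0, 1], so m_j = ∫ x^j · 2x dx = 2/(j+2).
moments = [2.0 / (j + 2) for j in range(4)]
coeffs = l2_polynomial_from_moments(moments, d=2)
# The density is itself a polynomial of degree 1, so it is recovered
# exactly: p_2(x) = 2x, i.e. coefficients (0, 2, 0).
```

Note the Hilbert matrix is famously ill-conditioned, so for larger $d$ an orthogonal polynomial basis would be numerically preferable.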

Solving Security Constrained Optimal Power Flow Problems by a Structure Exploiting Interior Point Method

The aim of this paper is to demonstrate a new approach to solving the linearized (n-1) security constrained optimal power flow (SCOPF) problem with a structure exploiting interior point solver. First, we present a reformulation of the linearized SCOPF model in which most matrices that need to be factorized are constant. Hence, most factorizations and … Read more

Erratum to: “On the set-semidefinite representation of nonconvex quadratic programs over arbitrary feasible sets” [Optim. Letters, 2012]

In this paper, an erratum is provided to the article “\emph{On the set-semidefinite representation of nonconvex quadratic programs over arbitrary feasible sets}”, published in Optim.\ Letters, 2012. Thanks to a careful observation by the first author, it has been found that the proof of Lemma 9 has a nontrivial gap, and consequently the main result (Theorem … Read more

Approximation of Matrix Rank Function and Its Application to Matrix Rank Minimization

The matrix rank minimization problem arises in many fields, such as control, signal processing, and system identification. However, the problem is NP-hard in general and computationally hard to solve directly in practice. In this paper, we provide a new class of approximation functions for the matrix rank, and the corresponding approximation … Read more
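For intuition only, here is one generic family of smooth rank surrogates (not necessarily the approximation functions proposed in the paper): replace $\mathrm{rank}(X) = \#\{i : \sigma_i > 0\}$ by a smoothed count of the singular values, which tends to the rank as the smoothing parameter $\varepsilon \to 0$.

```python
import numpy as np

def smooth_rank(X, eps):
    """Smooth surrogate for rank(X): sum_i s_i^2 / (s_i^2 + eps),
    where s_i are the singular values of X.  As eps -> 0, each term
    tends to 1 for s_i > 0 and to 0 for s_i = 0, recovering the rank."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s**2 / (s**2 + eps)))

X = np.diag([3.0, 1.0, 0.0])          # a rank-2 matrix
approx = smooth_rank(X, eps=1e-8)     # close to 2 for small eps
```

Unlike the rank, such a surrogate is differentiable in $X$, which is what makes gradient-based minimization schemes applicable.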

Computational aspects of risk-averse optimisation in two-stage stochastic models

In this paper we argue for aggregated models in decomposition schemes for two-stage stochastic programming problems. We observe that analogous schemes proved effective for single-stage risk-averse problems and for general linear programming problems. A major drawback of the aggregated approach for two-stage problems is that an aggregated master problem cannot contain all the information … Read more

On Families of Quadratic Surfaces Having Fixed Intersections with Two Hyperplanes

We investigate families of quadrics that have fixed intersections with two given hyperplanes. The cases when the two hyperplanes are parallel and when they are nonparallel are discussed. We show that these families can be described with only one parameter. In particular we show how the quadrics are transformed as the parameter changes. This research … Read more

Convex computation of the region of attraction of polynomial control systems

We address the long-standing problem of computing the region of attraction (ROA) of a target set (typically a neighborhood of an equilibrium point) of a controlled nonlinear system with polynomial dynamics and semialgebraic state and input constraints. We show that the ROA can be computed by solving a convex linear programming (LP) problem over the … Read more

A method for weighted projections to the positive definite cone

We study the numerical solution of the problem $\min_{X \ge 0} \|BX-c\|_2$, where $X$ is a symmetric square matrix, and $B$ a linear operator, such that $B^*B$ is invertible. With $\rho$ the desired fractional duality gap, we prove $O(\sqrt{m}\log\rho^{-1})$ iteration complexity for a simple primal-dual interior point method directly based on those for linear programs … Read more
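As a point of reference for the problem class (and not the interior point method of the paper), in the special case $B = I$ the weighted projection reduces to the classical Euclidean projection onto the positive semidefinite cone, obtained by zeroing the negative eigenvalues in a spectral decomposition:

```python
import numpy as np

def project_psd(C):
    """Euclidean projection of a symmetric matrix C onto the PSD cone:
    argmin_{X >= 0} ||X - C||_F, computed by clipping the negative
    eigenvalues of C to zero in its spectral decomposition."""
    C = (C + C.T) / 2.0                  # symmetrize against round-off
    w, V = np.linalg.eigh(C)
    return (V * np.maximum(w, 0.0)) @ V.T

C = np.array([[1.0, 2.0],
              [2.0, -3.0]])             # indefinite symmetric matrix
X = project_psd(C)                      # nearest PSD matrix in Frobenius norm
```

A general operator $B$ makes the problem a weighted projection, for which no such closed form exists; this is precisely where an interior point method becomes attractive.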

A first-order block-decomposition method for solving two-easy-block structured semidefinite programs

In this paper, we consider a first-order block-decomposition method for minimizing the sum of a convex differentiable function with Lipschitz continuous gradient, and two other proper closed convex (possibly, nonsmooth) functions with easily computable resolvents. The method presented contains two important ingredients from a computational point of view, namely: an adaptive choice of stepsize for … Read more

An acceleration procedure for optimal first-order methods

We introduce in this paper an optimal first-order method that allows an easy and cheap evaluation of the local Lipschitz constant of the objective’s gradient. This constant must ideally be chosen as small as possible at every iteration, while still providing a valid upper bound for the value of the objective function. In the previously … Read more
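The kind of local Lipschitz-constant search alluded to above is commonly implemented by backtracking: increase the trial constant $L$ until the quadratic model with that constant majorizes the objective at the gradient step. A minimal generic sketch (not the paper's specific procedure; names are illustrative):

```python
import numpy as np

def backtracking_lipschitz_step(f, grad, x, L0=1.0, growth=2.0):
    """One gradient step with a backtracked local Lipschitz estimate.
    L is increased until the standard descent condition
        f(x - g/L) <= f(x) - ||g||^2 / (2 L)
    holds, i.e. until the quadratic model with constant L is a valid
    local upper bound on f along the gradient direction."""
    g = grad(x)
    L = L0
    while f(x - g / L) > f(x) - g.dot(g) / (2.0 * L):
        L *= growth
    return x - g / L, L

# Example on f(x) = ||x||^2 / 2, whose gradient has Lipschitz constant 1.
f = lambda x: 0.5 * x.dot(x)
grad = lambda x: x
x1, L = backtracking_lipschitz_step(f, grad, np.array([3.0, -4.0]), L0=0.25)
# Starting from the underestimate L0 = 0.25, backtracking doubles L
# until L = 1, where the step lands exactly at the minimizer x1 = 0.
```

Starting from a deliberately small $L_0$ each iteration is what lets such methods exploit a small *local* constant rather than a conservative global one.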