Chance-constrained optimization via randomization: feasibility and optimality

In this paper we study the link between a semi-infinite chance-constrained optimization problem and its randomized version, i.e., the problem obtained by sampling a finite number of its constraints. Extending previous results on the feasibility of randomized convex programs, we establish here the feasibility of the solution obtained after the elimination of a portion of …
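
As a toy illustration of the sampled counterpart (the scenario approach), the sketch below draws N random linear constraints and solves the resulting program with scipy; the data, the sample size, and the greedy rule used to discard constraints are illustrative assumptions, not the paper's construction.

    import numpy as np
    from scipy.optimize import linprog

    # Toy chance-constrained LP (illustrative data, not from the paper):
    #   minimize c^T x   s.t.   P[ a(delta)^T x <= 1 ] >= 1 - epsilon,
    # where a(delta) is random. The randomized counterpart draws N i.i.d.
    # samples a_1, ..., a_N and enforces a_k^T x <= 1 for every k.
    rng = np.random.default_rng(0)
    n, N = 3, 500                                    # dimension, sample size
    c = -np.ones(n)                                  # maximize 1^T x
    A = 1.0 + 0.2 * rng.standard_normal((N, n))      # sampled constraint rows

    res = linprog(c, A_ub=A, b_ub=np.ones(N), bounds=[(0, None)] * n,
                  method="highs")
    x_scenario = res.x

    # Constraint removal (the paper quantifies the violation probability of
    # the solution obtained after removing a portion of the sampled
    # constraints); the greedy "drop the k tightest" rule is only a naive sketch.
    k = 25
    slack = np.ones(N) - A @ x_scenario
    keep = np.argsort(slack)[k:]
    res2 = linprog(c, A_ub=A[keep], b_ub=np.ones(keep.size),
                   bounds=[(0, None)] * n, method="highs")
    print("scenario solution:", x_scenario, "\nafter removal:", res2.x)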

On Verifiable Sufficient Conditions for Sparse Signal Recovery via L1 Minimization

We propose novel necessary and sufficient conditions for a sensing matrix to be “s-good”, that is, to allow exact L1-recovery of sparse signals with s nonzero entries when no measurement noise is present. We then express the error bounds for imperfect L1-recovery (nonzero measurement noise, nearly s-sparse signal, near-optimal solution of the optimization problem yielding …
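
For context, noiseless L1-recovery is the linear program sketched below, solved via the standard positive/negative split with scipy on a randomly generated Gaussian sensing matrix (an illustrative choice, not tied to the paper's verifiable conditions).

    import numpy as np
    from scipy.optimize import linprog

    # Noiseless L1-recovery (basis pursuit): given y = A x* with x* s-sparse,
    # solve  min ||x||_1  s.t.  A x = y.  Writing x = u - v with u, v >= 0
    # turns this into the LP  min 1^T(u + v)  s.t.  A u - A v = y, u, v >= 0.
    rng = np.random.default_rng(1)
    m, n, s = 40, 100, 5                       # illustrative sizes
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)
    x_true[support] = rng.standard_normal(s)
    y = A @ x_true

    res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    x_hat = res.x[:n] - res.x[n:]
    print("recovery error:", np.linalg.norm(x_hat - x_true))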

Gradient based method for cone programming with application to large-scale compressed sensing

In this paper, we study a gradient-based method for general cone programming (CP) problems. In particular, we first consider four natural primal-dual convex smooth minimization reformulations of these problems, and then discuss a variant of Nesterov’s smooth (VNS) method recently proposed by Tseng [30] for solving these reformulations. The associated worst-case major arithmetic operations costs …
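
The reformulations above are specific to cone programming; as a generic illustration of the smoothing idea behind Nesterov-type smooth methods in the compressed-sensing setting, the sketch below replaces the nonsmooth term |t| by a Huber-type function with a (1/mu)-Lipschitz gradient (a standard stand-in, not the paper's reformulation).

    import numpy as np

    # Huber-type smoothing of |t| (standard smoothing, not the paper's
    # specific reformulation):
    #   h_mu(t) = t^2 / (2*mu)  if |t| <= mu,   |t| - mu/2  otherwise,
    # so that |t| - mu/2 <= h_mu(t) <= |t| and grad h_mu is (1/mu)-Lipschitz,
    # making the smoothed objective amenable to optimal first-order methods.
    def huber(t, mu):
        t = np.asarray(t, dtype=float)
        return np.where(np.abs(t) <= mu, t**2 / (2 * mu), np.abs(t) - mu / 2)

    def huber_grad(t, mu):
        t = np.asarray(t, dtype=float)
        return np.where(np.abs(t) <= mu, t / mu, np.sign(t))

    # Smoothed l1-regularized least squares, an illustrative compressed-
    # sensing-style objective and its gradient:
    def f_smooth(x, A, y, lam, mu):
        r = A @ x - y
        return 0.5 * r @ r + lam * huber(x, mu).sum()

    def grad_smooth(x, A, y, lam, mu):
        return A.T @ (A @ x - y) + lam * huber_grad(x, mu)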

Identification and Elimination of Interior Points for the Minimum Enclosing Ball Problem

Given $\cA := \{a^1,\ldots,a^m\} \subset \R^n$, we consider the problem of reducing the input set for the computation of the minimum enclosing ball of $\cA$. In this note, given an approximate solution to the minimum enclosing ball problem, we propose a simple procedure to identify and eliminate points in $\cA$ that are guaranteed to lie …
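
As a rough sketch of the idea, using a naive, conservative criterion derived from standard minimum-enclosing-ball facts (not the paper's certified test): if (c, r) covers all points and r is within a factor (1 + eps) of the optimal radius, then points sufficiently deep inside the approximate ball lie in the interior of the true ball and can be discarded.

    import numpy as np

    # Naive elimination rule (NOT the paper's procedure). Assumptions:
    #   ||a - c|| <= r for every a in A, and r <= (1 + eps) * r_star.
    # Standard MEB facts then give ||c - c_star|| <= sqrt(r^2 - r_star^2),
    # so any point with
    #   ||a - c|| < r * (1 - sqrt(2*eps + eps**2)) / (1 + eps)
    # lies strictly inside the true minimum enclosing ball and is removable.
    def eliminate_interior(points, c, r, eps):
        thresh = r * (1.0 - np.sqrt(2.0 * eps + eps**2)) / (1.0 + eps)
        dist = np.linalg.norm(points - c, axis=1)
        return points[dist >= thresh]

    # Illustrative usage; the accuracy eps of the crude center is assumed,
    # not verified.
    rng = np.random.default_rng(2)
    A = rng.standard_normal((1000, 3))
    c = A.mean(axis=0)
    r = np.linalg.norm(A - c, axis=1).max()
    print(len(A), "->", len(eliminate_interior(A, c, r, eps=0.05)), "points kept")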

Optimal steepest descent algorithms for unconstrained convex problems: fine tuning Nesterov’s method

We modify the first-order algorithm for convex programming proposed by Nesterov. The resulting algorithm keeps the optimal complexity obtained by Nesterov without requiring a known Lipschitz constant for the gradient, and it performs better on practically all examples in a set of test problems. Citation: Technical Report, Federal University of Santa Catarina, 2008. …
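
A generic sketch of this idea (acceleration with a backtracking estimate of the Lipschitz constant, run on an illustrative quadratic; the paper's variant differs in its precise parameter rules) might look as follows.

    import numpy as np

    def accel_gradient(f, grad, x0, iters=500, L0=1.0):
        """Accelerated gradient with a backtracked Lipschitz estimate (sketch)."""
        x = y = np.asarray(x0, dtype=float)
        L, t = L0, 1.0
        for _ in range(iters):
            g = grad(y)
            while True:                       # grow L until the upper bound holds
                x_new = y - g / L
                if f(x_new) <= f(y) - 0.5 * g.dot(g) / L:
                    break
                L *= 2.0
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)
            x, t = x_new, t_new
            L *= 0.9                          # heuristic: let the estimate shrink
        return x

    # Illustrative strongly convex quadratic f(x) = 0.5 x^T Q x - b^T x.
    rng = np.random.default_rng(3)
    M = rng.standard_normal((50, 20))
    Q, b = M.T @ M, rng.standard_normal(20)
    x_hat = accel_gradient(lambda x: 0.5 * x @ Q @ x - b @ x,
                           lambda x: Q @ x - b, np.zeros(20))
    print("distance to optimum:", np.linalg.norm(x_hat - np.linalg.solve(Q, b)))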

Strong Duality and Minimal Representations for Cone Optimization

The elegant results on strong duality and strict complementarity for linear programming (LP) can fail for cone programming over nonpolyhedral cones. One can have: unattained optimal values; nonzero duality gaps; and no primal-dual optimal pair that satisfies strict complementarity. This failure is tied to the nonclosure of sums of nonpolyhedral closed cones. We take a …
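
A classic textbook illustration of such a failure (not taken from this paper) is the semidefinite program
\[
p^* = \min_{X \succeq 0} X_{11} \quad \text{s.t.} \quad X_{22} = 0,\; X_{11} + 2X_{23} = 1,
\]
where $X \succeq 0$ together with $X_{22}=0$ forces the second row and column of $X$ to vanish, so $X_{23}=0$ and $p^* = 1$; in the dual, the slack matrix
\[
\begin{pmatrix} 1-y_2 & 0 & 0\\ 0 & -y_1 & -y_2\\ 0 & -y_2 & 0 \end{pmatrix} \succeq 0
\]
has a zero $(3,3)$ entry, forcing $y_2 = 0$ and hence a dual optimal value $d^* = 0$, i.e., a finite duality gap of $1$.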

Representation of nonnegative convex polynomials

We provide a specific representation of convex polynomials nonnegative on a convex (not necessarily compact) basic closed semi-algebraic set $K$ of $R^n$. Namely, they belong to a specific subset of the quadratic module generated by the concave polynomials that define $K$. Citation: To appear in Archiv der Mathematik.
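
For context, the quadratic module generated by the polynomials $g_1,\dots,g_m$ that define $K=\{x : g_j(x)\geq 0,\ j=1,\dots,m\}$ is the set
\[
Q(g) = \Bigl\{ \sigma_0 + \sum_{j=1}^{m} \sigma_j g_j \;:\; \sigma_0,\dots,\sigma_m \ \text{sums of squares of polynomials} \Bigr\},
\]
and the result above singles out a specific subset of $Q(g)$ that already contains every convex polynomial nonnegative on $K$ when the $g_j$ are concave.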

On Non-Convex Quadratic Programming with Box Constraints

Non-Convex Quadratic Programming with Box Constraints is a fundamental NP-hard global optimisation problem. Recently, some authors have studied a certain family of convex sets associated with this problem. We prove several fundamental results concerning these convex sets: we determine their dimension, characterise their extreme points and vertices, show their invariance under certain affine transformations, …

T-algebras and linear optimization over symmetric cones

Euclidean Jordan algebras are a commonly used tool in designing interior-point algorithms for symmetric cone programs. T-algebras, on the other hand, have rarely been used in symmetric cone programming. In this paper, we use both algebraic characterizations of symmetric cones to extend the target-following framework of linear programming to symmetric cone programming. Within this framework, …

An Analysis of Weighted Least Squares Method and Layered Least Squares Method with the Basis Block Lower Triangular Matrix Form

In this paper, we analyze the limiting behavior of the weighted least squares problem $\min_{x\in\Re^n}\sum_{i=1}^p\|D_i(A_ix-b_i)\|^2$, where each $D_i$ is a positive definite diagonal matrix. We consider the situation where the magnitudes of the weights differ drastically from block to block, so that $\max(D_1)\geq\min(D_1) \gg \max(D_2) \geq \min(D_2) \gg \max(D_3) \geq \ldots \gg \max(D_{p-1}) \geq \min(D_{p-1}) \gg \max(D_p)$. …
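
A small numerical illustration of the setting (two blocks whose weights differ by many orders of magnitude; data and scales are made up) shows the solution drifting toward a layered least squares solution in which the heavily weighted block is fitted first.

    import numpy as np

    # Two-block weighted least squares with drastically different weights:
    #   min_x ||D1 (A1 x - b1)||^2 + ||D2 (A2 x - b2)||^2,   D1 >> D2.
    # As the weight gap grows, the solution approaches the layered least
    # squares solution: block 1 is fitted exactly (it is underdetermined
    # here), and block 2 only on the remaining degrees of freedom.
    rng = np.random.default_rng(4)
    n = 6
    A1, b1 = rng.standard_normal((3, n)), rng.standard_normal(3)
    A2, b2 = rng.standard_normal((5, n)), rng.standard_normal(5)

    def weighted_ls(w1, w2):
        A = np.vstack([w1 * A1, w2 * A2])
        b = np.concatenate([w1 * b1, w2 * b2])
        return np.linalg.lstsq(A, b, rcond=None)[0]

    for gap in (1e0, 1e3, 1e6):
        x = weighted_ls(gap, 1.0)
        print(f"weight ratio {gap:.0e}: block-1 residual "
              f"{np.linalg.norm(A1 @ x - b1):.2e}")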