On Verifiable Sufficient Conditions for Sparse Signal Recovery via L1 Minimization

We propose novel necessary and sufficient conditions for a sensing matrix to be “s-good” — to allow for exact L1-recovery of sparse signals with s nonzero entries when no measurement noise is present. Then we express the error bounds for imperfect L1-recovery (nonzero measurement noise, nearly s-sparse signal, near-optimal solution of the optimization problem yielding …
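The noiseless L1-recovery the abstract refers to is the basis-pursuit program min ||x||_1 subject to Ax = y, which can be cast as a linear program. A minimal sketch (a hypothetical random Gaussian instance, solved with SciPy's `linprog` via the standard split into (x, t) with -t <= x <= t; not the paper's verification machinery):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, s = 20, 12, 2                      # signal length, measurements, sparsity
A = rng.standard_normal((m, n))          # random Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true                           # noiseless measurements

# min ||x||_1 s.t. Ax = y, as an LP in (x, t): minimize sum(t), with -t <= x <= t
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[np.eye(n), -np.eye(n)],
                 [-np.eye(n), -np.eye(n)]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=(None, None))       # default bounds are x >= 0, so override
x_hat = res.x[:n]
print(np.max(np.abs(x_hat - x_true)))    # tiny when the matrix is s-good
```

Whether recovery is exact for every s-sparse signal is precisely the "s-good" property the paper's conditions certify; the sketch only demonstrates one instance.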

Validation Analysis of Robust Stochastic Approximation Method

The main goal of this paper is to develop accuracy estimates for stochastic programming problems by employing robust stochastic approximation (SA) type algorithms. To this end we show that while running a Robust Mirror Descent Stochastic Approximation procedure one can compute, with a small additional effort, lower and upper statistical bounds for the optimal objective …
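For intuition, robust SA uses long (roughly 1/sqrt(N)) stepsizes and reports the *average* of the projected iterates rather than the last one. A minimal one-dimensional sketch on a toy problem of my choosing (minimizing E[(x - ξ)^2]/2 over [-1, 1] with ξ ~ N(0.3, 1), so the optimum is x* = 0.3; this illustrates the averaging idea only, not the paper's bound computation):

```python
import numpy as np

rng = np.random.default_rng(1)
lo, hi = -1.0, 1.0       # feasible interval
N = 20000                # total number of SA steps, fixed in advance
x, avg = 0.0, 0.0
for t in range(1, N + 1):
    xi = 0.3 + rng.standard_normal()
    g = x - xi                            # unbiased stochastic gradient at x
    step = 1.0 / np.sqrt(N)               # "robust" constant step ~ N^{-1/2}
    x = min(hi, max(lo, x - step * g))    # projected SA step
    avg += (x - avg) / t                  # running average: the reported solution
print(avg)  # close to the optimum 0.3
```

The averaged iterate converges at the O(1/sqrt(N)) rate even though individual iterates keep oscillating; the paper's contribution is turning the same trajectory into statistical lower/upper bounds on the optimal value.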

Large Deviations of Vector-valued Martingales in 2-Smooth Normed Spaces

In this paper, we derive exponential bounds on probabilities of large deviations for “light tail” martingales taking values in finite-dimensional normed spaces. Our primary emphasis is on the case where the bounds are dimension-independent or nearly so. We demonstrate that this is the case when the norm on the space can be approximated, within an …

Stochastic Approximation approach to Stochastic Programming

In this paper we consider optimization problems where the objective function is given in the form of an expectation. A basic difficulty in solving such stochastic optimization problems is that the involved multidimensional integrals (expectations) cannot be computed with high accuracy. The aim of this paper is to compare two computational approaches based on Monte …

Nonparametric Estimation via Convex Programming

In the paper, we focus primarily on the problem of recovering a linear form g'*x of an unknown “signal” x known to belong to a given convex compact set X in R^n from N independent realizations of a random variable taking values in a finite set, the distribution p of the variable being affinely parameterized by …

Accuracy Certificates for Computational Problems with Convex Structure

The goal of the current paper is to introduce the notion of certificates which verify the accuracy of solutions of computational problems with convex structure; such problems include minimizing convex functions, variational inequalities corresponding to monotone operators, computing saddle points of convex-concave functions and solving convex Nash equilibrium problems. We demonstrate how the implementation of …

Self-Concordant Barriers for Convex Approximations of Structured Convex Sets

We show how to approximate the feasible region of structured convex optimization problems by a family of convex sets with explicitly given and efficient (if the accuracy of the approximation is moderate) self-concordant barriers. This approach extends the reach of the modern theory of interior-point methods, and lays the foundation for new ways to treat …

On Safe Tractable Approximations of Chance Constrained Linear Matrix Inequalities

In the paper, we consider the chance constrained version $$ \Prob\{A_0[x]+\sum_{i=1}^d\zeta_i A_i[x]\succeq0\}\geq1-\epsilon, $$ of an affinely perturbed Linear Matrix Inequality constraint; here $A_i[x]$ are symmetric matrices affinely depending on the decision vector $x$, and $\zeta_1,\ldots,\zeta_d$ are mutually independent random perturbations with light tail distributions (e.g., bounded or Gaussian). Constraints of this type, playing …
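For a fixed decision x, checking whether the chance constraint holds amounts to asking how often the perturbed matrix stays positive semidefinite. A Monte Carlo sanity check on a hypothetical 2x2 instance (the matrices A_i[x] are already evaluated at some fixed x and invented here for illustration; the paper is about *tractable deterministic approximations*, not sampling):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical instance at a fixed x: nominal part A0 and two perturbation directions.
A0 = np.array([[3.0, 0.0], [0.0, 3.0]])
A1 = np.array([[1.0, 0.0], [0.0, -1.0]])
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])
M = 20000
ok = 0
for _ in range(M):
    z = rng.standard_normal(2)              # light-tailed (Gaussian) perturbations
    S = A0 + z[0] * A1 + z[1] * A2
    if np.linalg.eigvalsh(S).min() >= 0:    # the LMI holds for this draw
        ok += 1
p_hat = ok / M
print(p_hat)  # analytically 1 - exp(-9/2) ≈ 0.989 for this instance
```

Here the perturbation matrix has eigenvalues ±sqrt(z1^2 + z2^2), so the LMI holds iff that radius is at most 3, giving the Rayleigh-tail probability 1 - exp(-9/2).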

Selected Topics in Robust Convex Optimization

Robust Optimization is a rapidly developing methodology for handling optimization problems affected by non-stochastic “uncertain-but-bounded” data perturbations. In this paper, we overview several selected topics in this popular area, specifically, (1) recent extensions of the basic concept of {\sl robust counterpart} of an optimization problem with uncertain data, (2) tractability of robust counterparts, (3) links …

On complexity of Shmoys–Swamy class of two-stage linear stochastic programming problems

We consider a class of two-stage linear stochastic programming problems, introduced by Shmoys and Swamy (2004), motivated by a relaxation of a stochastic set cover problem. We show that the sample size required to solve this problem by the sample average approximation (SAA) method with a relative accuracy $\kappa>0$ and confidence $1-\alpha$ is polynomial in …
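The SAA method the abstract analyzes replaces the expectation in the objective by an empirical average over drawn scenarios and minimizes that surrogate exactly. A minimal sketch on a toy problem chosen for illustration (minimizing E|x - ξ| with ξ ~ Exponential(1), whose true optimum is the median, ln 2; this is not the Shmoys–Swamy two-stage model itself):

```python
import numpy as np

rng = np.random.default_rng(2)
# True problem: min_x E|x - xi|, xi ~ Exp(1); the minimizer is the median, ln 2 ≈ 0.693.
N = 200000                               # SAA sample size
sample = rng.exponential(1.0, size=N)
# SAA: minimize the empirical average (1/N) * sum_i |x - xi_i|;
# for this objective the exact SAA minimizer is the sample median.
x_saa = np.median(sample)
print(x_saa)  # close to ln 2
```

The sample-size question studied in the paper is exactly how large N must be so that, with probability at least 1-α, the SAA minimizer is κ-accurate for the true problem.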