Constant Depth Decision Rules for multistage optimization under uncertainty

In this paper, we introduce a new class of decision rules, referred to as Constant Depth Decision Rules (CDDRs), for multistage optimization under linear constraints with uncertainty-affected right-hand sides. We consider two uncertainty classes: discrete uncertainties, which can take, at each stage, at most a fixed number d of different values, and polytopic uncertainties which, …
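
As a loose illustration of the discrete case (a hypothetical reading of "constant depth", not the paper's formal construction): a rule of depth mu lets the stage decision depend only on the last mu observed outcomes, so with d possible values per stage it amounts to a lookup table with at most d**mu entries.

```python
import itertools

def make_depth_mu_rule(d, mu, values):
    """Hypothetical depth-mu rule for discrete uncertainty: the decision
    depends only on the last mu observed outcomes, each from a d-element
    alphabet, so the rule is a lookup table with d**mu entries."""
    table = {hist: values.get(hist, 0.0)
             for hist in itertools.product(range(d), repeat=mu)}

    def rule(history):
        # Pad short histories with outcome 0, then keep the last mu outcomes.
        hist = tuple(history)[-mu:]
        hist = (0,) * (mu - len(hist)) + hist
        return table[hist]

    return rule

# Example: d = 2 outcomes per stage, depth mu = 2.
rule = make_depth_mu_rule(d=2, mu=2, values={(0, 1): 3.5, (1, 1): 1.0})
print(rule([0, 0, 1]))  # decision after history (0, 0, 1): keys off (0, 1) -> 3.5
```

Whatever the horizon, the description size of such a rule stays at d**mu entries per stage, which is the appeal of bounding the depth.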

Non-asymptotic confidence bounds for the optimal value of a stochastic program

We discuss a general approach to building non-asymptotic confidence bounds for stochastic optimization problems. Our principal contribution is the observation that a Sample Average Approximation of a problem supplies upper and lower bounds for the optimal value of the problem which are essentially better than the quality of the corresponding optimal solutions. At the same …
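
By way of background on how SAA-based bounds are usually assembled (a generic sketch on an assumed toy problem min_x E[(x - xi)^2] with xi standard normal, not this paper's construction): averaging the optimal values of independent SAA problems gives a statistic that underestimates the true optimal value in expectation, while evaluating any fixed candidate solution on a fresh sample overestimates it.

```python
import numpy as np

rng = np.random.default_rng(0)

def saa_opt_value(sample):
    # For F(x, xi) = (x - xi)^2, the SAA problem min_x (1/N) sum_i (x - xi_i)^2
    # is solved by the sample mean, so its optimal value is the sample variance.
    return np.mean((sample - sample.mean()) ** 2)

M, N = 50, 200
# Lower-bound statistic: average of optimal values of M independent SAA problems
# (each underestimates the true optimal value, here Var(xi) = 1, in expectation).
lower = np.mean([saa_opt_value(rng.normal(size=N)) for _ in range(M)])

# Upper-bound statistic: evaluate a fixed candidate solution on a fresh sample.
x_hat = rng.normal(size=N).mean()            # candidate from one SAA run
upper = np.mean((x_hat - rng.normal(size=10_000)) ** 2)

print("lower estimate:", lower)   # true optimal value is 1
print("upper estimate:", upper)
```

Turning these point estimates into non-asymptotic confidence statements is precisely the step the paper addresses; the sketch stops at the estimates themselves.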

On Lower Complexity Bounds for Large-Scale Smooth Convex Optimization

In this note we present tight lower bounds on the information-based complexity of large-scale smooth convex minimization problems. We demonstrate, in particular, that the k-step Conditional Gradient (a.k.a. Frank-Wolfe) algorithm as applied to minimizing smooth convex functions over the n-dimensional box with n ≥ k is optimal, up to an O(ln n)-factor, in terms of …
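
For reference, the Conditional Gradient method over the box is especially cheap: its linear minimization oracle is just a sign pattern. A minimal sketch on an illustrative smooth convex quadratic (the instance is ours, not the note's):

```python
import numpy as np

def frank_wolfe_box(grad, x0, k):
    """k steps of Conditional Gradient (Frank-Wolfe) over the box [-1, 1]^n.
    The linear minimization oracle over the box is s = -sign(grad)."""
    x = x0.copy()
    for t in range(k):
        g = grad(x)
        s = -np.sign(g)              # argmin over [-1, 1]^n of <g, s>
        gamma = 2.0 / (t + 2.0)      # standard open-loop step size
        x = (1 - gamma) * x + gamma * s
    return x

# Smooth convex test objective f(x) = 0.5 * ||x - c||^2 with c inside the box.
n = 1000
c = np.linspace(-0.9, 0.9, n)
x = frank_wolfe_box(grad=lambda x: x - c, x0=np.zeros(n), k=100)
print("final objective:", 0.5 * np.sum((x - c) ** 2))
```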

Solving large-scale polynomial convex problems on $\ell_1$/nuclear norm balls by randomized first-order algorithms

One of the most attractive recent approaches to processing well-structured large-scale convex optimization problems is based on a smooth convex-concave saddle point reformulation of the problem of interest, followed by solving the resulting problem with a fast first-order saddle point method that utilizes the smoothness of the saddle point cost function. In this paper, we demonstrate that when …
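
To make the reformulation concrete on a generic example (not necessarily the problems treated in the paper): minimizing ||Ax - b||_inf over the l1 ball equals the bilinear saddle point problem min over ||x||_1 <= 1, max over ||y||_1 <= 1 of y^T (Ax - b), since the l-infinity norm is the support function of the l1 ball. Below is a plain Euclidean extragradient stand-in for the fast saddle point methods mentioned:

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    # Euclidean projection onto the l1 ball (sort-based, O(n log n)).
    if np.sum(np.abs(v)) <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def extragradient(A, b, steps=500):
    """Euclidean extragradient on the bilinear saddle point
    min_{||x||_1 <= 1} max_{||y||_1 <= 1} y^T (A x - b)."""
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    eta = 1.0 / np.linalg.norm(A, 2)          # step size 1/L for bilinear coupling
    for _ in range(steps):
        # Extrapolation step from (x, y), then update step using the midpoint.
        xh = project_l1_ball(x - eta * (A.T @ y))
        yh = project_l1_ball(y + eta * (A @ x - b))
        x = project_l1_ball(x - eta * (A.T @ yh))
        y = project_l1_ball(y + eta * (A @ xh - b))
    return x, y

rng = np.random.default_rng(1)
A, b = rng.normal(size=(40, 200)), rng.normal(size=40) * 0.1
x, _ = extragradient(A, b)
print("||Ax - b||_inf =", np.max(np.abs(A @ x - b)))
```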

A randomized Mirror-Prox method for solving structured large-scale matrix saddle-point problems

In this paper, we derive a randomized version of the Mirror-Prox method for solving some structured matrix saddle-point problems, such as the maximal eigenvalue minimization problem. Deterministic first-order schemes, such as Nesterov’s Smoothing Techniques or standard Mirror-Prox methods, require the exact computation of a matrix exponential at every iteration, limiting the size of the problems …
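
The bottleneck can be made concrete: deterministic schemes form exp(X) exactly at each iteration, an O(n^3) dense computation, whereas a randomized scheme can get by with products exp(X)v, computable matrix-free. A hedged sketch of the cheap primitive using SciPy (this is the general idea, not the paper's specific estimator):

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse.linalg import expm_multiply

rng = np.random.default_rng(2)
n = 300
S = rng.normal(size=(n, n))
X = (S + S.T) / np.sqrt(n)          # symmetric test matrix

# Exact dense route: form exp(X) itself (O(n^3) time, O(n^2) memory).
E = expm(X)

# Matrix-free route: exp(X) v via matrix-vector products only.
v = rng.normal(size=n)
w = expm_multiply(X, v)
print("relative error:", np.linalg.norm(w - E @ v) / np.linalg.norm(E @ v))

# A randomized trace-type estimate: E[u u^T] = I implies E[u^T exp(X) u] = tr exp(X).
u = rng.normal(size=n)
print("one-sample estimate of tr exp(X):", u @ expm_multiply(X, u),
      "vs exact:", np.trace(E))
```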

Robust Energy Cost Optimization of Water Distribution System with Uncertain Demand

We propose a methodology, based on the concept of Affinely Adjustable Robust Optimization, for optimizing the daily operation of pumping stations; it takes into account the fact that a real water distribution system is unavoidably affected by uncertainty. For operation control, the main source of uncertainty is the demand. Traditional methods for …
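
To indicate the shape of an affinely adjustable rule here (a schematic with made-up demand, tariff, and a crude tank-balance proxy, not the paper's model): the hour-t pumping decision is affine in the demand deviations observed so far, and for box uncertainty each robust linear constraint reduces to its nominal value minus rho times the l1 norm of its deviation coefficients.

```python
import cvxpy as cp
import numpy as np

T = 24
d_bar = 50 + 20 * np.sin(np.linspace(0, 2 * np.pi, T))   # made-up nominal demand
rho = 5.0                                                 # box: |d_t - d_bar_t| <= rho
price = 1.0 + 0.5 * (np.arange(T) % 12 < 6)               # made-up hourly tariff

q0 = cp.Variable(T)        # nominal pumping schedule
Q = cp.Variable((T, T))    # affine recourse; strictly lower triangular = non-anticipative

constraints = [Q[i, j] == 0 for i in range(T) for j in range(i, T)]
for t in range(T):
    # Tank-balance proxy: cumulative pumping covers cumulative demand for every
    # deviation zeta with |zeta| <= rho; the worst case of a linear function of
    # zeta subtracts rho times the l1 norm of its coefficient vector.
    coeff = cp.sum(Q[: t + 1, :], axis=0) - np.concatenate(
        [np.ones(t + 1), np.zeros(T - t - 1)])
    constraints += [cp.sum(q0[: t + 1]) - np.sum(d_bar[: t + 1])
                    - rho * cp.norm(coeff, 1) >= 0]
constraints += [q0 >= 0]

# Worst-case energy cost, by the same box duality applied to the objective.
cost = price @ q0 + rho * cp.norm(Q.T @ price, 1)
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("worst-case cost:", prob.value)
```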

Accuracy guarantees for ℓ1-recovery

We discuss two new methods for the recovery of sparse signals from noisy observations based on ℓ1-minimization. They are closely related to well-known techniques such as the Lasso and the Dantzig Selector. However, these estimators come with efficiently verifiable guarantees of performance. By optimizing these bounds with respect to the method parameters we are able to …
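
For orientation, both reference estimators are convex programs; a generic CVXPY sketch on synthetic data (the parameter choices are illustrative only):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
m, n, s = 80, 200, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[:s] = 3 * rng.normal(size=s)
y = A @ x_true + 0.01 * rng.normal(size=m)

x = cp.Variable(n)
lam = 0.05

# Lasso: l1-penalized least squares.
lasso = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(A @ x - y) + lam * cp.norm1(x)))
lasso.solve()
x_lasso = x.value

# Dantzig Selector: minimize ||x||_1 under a sup-norm bound on residual correlations.
dantzig = cp.Problem(cp.Minimize(cp.norm1(x)),
                     [cp.norm_inf(A.T @ (A @ x - y)) <= lam])
dantzig.solve()

print("Lasso error:  ", np.linalg.norm(x_lasso - x_true))
print("Dantzig error:", np.linalg.norm(x.value - x_true))
```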

L1 Minimization via Randomized First-Order Algorithms

In this paper we propose randomized first-order algorithms for solving bilinear saddle point problems. Our developments are motivated by the need for sublinear time algorithms to solve large-scale parametric bilinear saddle point problems where cheap online assessment of solution quality is crucial. We present theoretical efficiency estimates of our algorithms and discuss a number …
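
The sublinear-time aspect rests on unbiased stochastic estimates of the bilinear operator that touch a single row or column of the matrix per step. A schematic of one such sampling primitive (illustrative; not the specific estimator of the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_matvec(row_of_A, y):
    """Unbiased estimate of A^T y from a single row of A: draw index j with
    probability |y_j| / ||y||_1 and return sign(y_j) * ||y||_1 * A_j, so the
    expectation is sum_j y_j A_j = A^T y."""
    norm1 = np.sum(np.abs(y))
    j = rng.choice(len(y), p=np.abs(y) / norm1)
    return np.sign(y[j]) * norm1 * row_of_A(j)

m, n = 500, 400
A = rng.normal(size=(m, n))
y = rng.normal(size=m)

est = np.mean([sample_matvec(lambda j: A[j], y) for _ in range(2000)], axis=0)
print("relative error of averaged estimate:",
      np.linalg.norm(est - A.T @ y) / np.linalg.norm(A.T @ y))
```

Each sample costs one row access (O(n)) instead of a full matrix-vector product (O(mn)), which is the source of the sublinearity.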

On Low Rank Matrix Approximations with Applications to the Synthesis Problem in Compressed Sensing

We consider the synthesis problem of Compressed Sensing: given $s$ and an $M\times n$ matrix $A$, extract from $A$ an $m\times n$ submatrix $A_m$, with $m$ as small as possible, which is $s$-good, that is, every signal $x$ with at most $s$ nonzero entries can be recovered from the observation $A_m x$ by $\ell_1$ minimization: $x$ …
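
One classical, efficiently computable certificate of $s$-goodness (the textbook mutual-coherence bound, generally much weaker than the verifiable conditions developed in this line of work) states that a column-normalized matrix is $s$-good whenever s < (1 + 1/mu)/2, where mu is the mutual coherence. A quick check:

```python
import numpy as np

def max_s_good_by_coherence(A):
    """Largest s certified by the mutual-coherence bound s < (1 + 1/mu)/2."""
    Anorm = A / np.linalg.norm(A, axis=0)      # normalize columns
    G = np.abs(Anorm.T @ Anorm)                # absolute Gram matrix
    np.fill_diagonal(G, 0.0)
    mu = G.max()                               # mutual coherence
    s = int(np.floor((1 + 1 / mu) / 2 - 1e-12))
    return s, mu

rng = np.random.default_rng(5)
A = rng.normal(size=(128, 512))
s, mu = max_s_good_by_coherence(A)
print(f"coherence mu = {mu:.3f}, certified s-goodness up to s = {s}")
```

For random matrices this certificate is quite conservative, which is one motivation for the stronger, still efficiently verifiable, conditions studied in these papers.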

Verifiable conditions of $\ell_1$-recovery for sparse signals with sign restrictions

We propose necessary and sufficient conditions for a sensing matrix to be “$s$-semigood”, that is, to allow for exact $\ell_1$-recovery of sparse signals with at most $s$ nonzero entries under sign restrictions on part of the entries. We express error bounds for imperfect $\ell_1$-recovery in terms of the characteristics underlying these conditions. These characteristics, although difficult …
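
For concreteness, the sign-restricted $\ell_1$-recovery problem itself is a linear program; a SciPy sketch (the instance and the restricted index set P are hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
m, n, s = 60, 150, 4
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[:s] = np.abs(rng.normal(size=s))   # entries known to be nonnegative
y = A @ x_true

# min ||x||_1  s.t.  A x = y,  x_i >= 0 for i in a sign-restricted set P.
# Split x = u - v with u, v >= 0; forcing v_i = 0 for i in P encodes x_i >= 0,
# and at the optimum sum(u) + sum(v) equals ||x||_1.
Pset = set(range(20))                     # hypothetical sign-restricted indices
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
bounds = [(0, None)] * n + [(0, 0) if i in Pset else (0, None) for i in range(n)]
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds)
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```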