New complexity results and algorithms for min-max-min robust combinatorial optimization

In this work we investigate the min-max-min robust optimization problem applied to combinatorial problems with uncertain cost vectors that are contained in a convex uncertainty set. The idea of the approach is to calculate a set of k feasible solutions that are worst-case optimal if in each possible scenario the best of the k solutions would …
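As a minimal illustration of the objective (not of the paper's algorithm), the NumPy sketch below evaluates the min-max-min value of k candidate solutions against a finite sample of cost vectors: each sampled scenario is charged the cost of the best of the k solutions, and the worst sampled scenario determines the value. All names and data are illustrative assumptions.

```python
import numpy as np

def min_max_min_value(solutions, scenarios):
    """Worst-case cost when, in every scenario, the cheapest of the
    k candidate solutions may be chosen.

    solutions : (k, n) 0/1 array of feasible combinatorial solutions
    scenarios : (m, n) array of cost vectors sampled from the uncertainty set
    """
    # costs[i, j] = cost of solution j under scenario i
    costs = scenarios @ solutions.T
    # in each scenario pick the best of the k solutions, then take the worst scenario
    return costs.min(axis=1).max()

# Illustrative example: 3 candidate solutions on 4 items, 5 sampled cost vectors.
rng = np.random.default_rng(0)
solutions = np.array([[1, 1, 0, 0],
                      [0, 1, 1, 0],
                      [1, 0, 0, 1]])
scenarios = rng.uniform(1.0, 3.0, size=(5, 4))
print(min_max_min_value(solutions, scenarios))
```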

Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence

The task of recovering a low-rank matrix from its noisy linear measurements plays a central role in computational science. Smooth formulations of the problem often exhibit an undesirable phenomenon: the condition number, classically defined, scales poorly with the dimension of the ambient space. In contrast, we show here that in a variety of concrete circumstances, …
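To fix notation, the sketch below evaluates one common nonsmooth composite formulation of this recovery task, an $\ell_1$ loss on linear measurements of a factored matrix; the symbols U, V, A, b and the Gaussian data are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

def composite_objective(U, V, A, b):
    """Nonsmooth composite loss f(U, V) = (1/m) * sum_i |<A_i, U V^T> - b_i|.

    U : (d1, r), V : (d2, r) low-rank factors
    A : (m, d1, d2) measurement matrices, b : (m,) measurements
    """
    X = U @ V.T
    residuals = np.einsum('mij,ij->m', A, X) - b
    return np.abs(residuals).mean()

# Illustrative example: Gaussian measurements of a planted rank-2 matrix.
rng = np.random.default_rng(1)
d1, d2, r, m = 8, 6, 2, 100
U_star, V_star = rng.normal(size=(d1, r)), rng.normal(size=(d2, r))
A = rng.normal(size=(m, d1, d2))
b = np.einsum('mij,ij->m', A, U_star @ V_star.T)
print(composite_objective(U_star, V_star, A, b))  # ~0 at the planted factors
```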

Composite optimization for robust blind deconvolution

The blind deconvolution problem seeks to recover a pair of vectors from a set of rank-one bilinear measurements. We consider a natural nonsmooth formulation of the problem and show that under standard statistical assumptions, its moduli of weak convexity, sharpness, and Lipschitz continuity are all dimension independent. This phenomenon persists even when up to …
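A minimal sketch of the kind of nonsmooth formulation described here, assuming measurements of the form $c_i = \langle a_i, x\rangle\langle b_i, y\rangle$ with a fraction of gross corruptions; the variable names and the corruption model are illustrative.

```python
import numpy as np

def robust_blind_deconv_loss(x, y, A, B, c):
    """Nonsmooth loss F(x, y) = (1/m) * sum_i |<a_i, x> <b_i, y> - c_i|
    for rank-one bilinear measurements, possibly grossly corrupted."""
    residuals = (A @ x) * (B @ y) - c
    return np.abs(residuals).mean()

# Illustrative data: 10% of the measurements are grossly corrupted.
rng = np.random.default_rng(2)
d, m = 20, 400
x_star, y_star = rng.normal(size=d), rng.normal(size=d)
A, B = rng.normal(size=(m, d)), rng.normal(size=(m, d))
c = (A @ x_star) * (B @ y_star)
corrupt = rng.random(m) < 0.1
c[corrupt] += rng.normal(scale=50.0, size=corrupt.sum())
print(robust_blind_deconv_loss(x_star, y_star, A, B, c))
```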

Parallelizing Subgradient Methods for the Lagrangian Dual in Stochastic Mixed-Integer Programming

The dual decomposition of stochastic mixed-integer programs can be solved by the projected subgradient algorithm. We show how to make this algorithm more amenable to parallelization in a master-worker model by describing two approaches, which can be combined in a natural way. The first approach partitions the scenarios into batches, and makes separate use of …
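A minimal Python sketch of the master-worker scenario-batching pattern, with a toy thresholding rule standing in for each scenario subproblem solve; the function names, the placeholder subproblem, and the use of ProcessPoolExecutor are assumptions for illustration only.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def scenario_subgradient(lam, cost):
    """Toy stand-in for one scenario subproblem: given the current dual
    multipliers lam, return this scenario's subgradient contribution.
    In the real setting this would be a scenario MIP solve."""
    return (cost - lam < 0).astype(float)

def batch_subgradient(args):
    """Worker task: solve all subproblems in one batch of scenarios and
    return the summed contribution."""
    lam, batch = args
    return sum(scenario_subgradient(lam, cost) for cost in batch)

def parallel_subgradient(lam, scenario_costs, n_workers=4):
    """Master: partition scenarios into batches, farm each batch out to a
    worker, and aggregate the subgradient."""
    batches = np.array_split(scenario_costs, n_workers)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        parts = pool.map(batch_subgradient, [(lam, b) for b in batches])
    return sum(parts)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    scenario_costs = rng.normal(size=(32, 5))  # 32 scenarios, 5 dual components
    lam = np.zeros(5)
    print(parallel_subgradient(lam, scenario_costs))
```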

Stochastic model-based minimization of weakly convex functions

We consider an algorithm that successively samples and minimizes stochastic models of the objective function. We show that under weak-convexity and Lipschitz conditions, the algorithm drives the expected norm of the gradient of the Moreau envelope to zero at the rate $O(k^{-1/4})$. Our result yields the first complexity guarantees for the stochastic proximal point algorithm …
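The sketch below instantiates the stochastic proximal point algorithm mentioned above on the robust-regression loss $f(x) = \mathbb{E}_i\,|a_i^T x - b_i|$, where each sampled proximal subproblem has a closed-form solution; the data, step schedule, and function names are illustrative assumptions.

```python
import numpy as np

def stochastic_proximal_point(A, b, x0, alpha0=1.0, n_steps=2000, seed=0):
    """Stochastic proximal point for f(x) = E_i |a_i^T x - b_i|: at step k,
    sample i and set
        x_{k+1} = argmin_x |a_i^T x - b_i| + 1/(2*alpha_k) ||x - x_k||^2,
    which reduces to the closed-form clipped step used below."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for k in range(n_steps):
        alpha = alpha0 / np.sqrt(k + 1)        # diminishing proximal parameter
        i = rng.integers(len(b))
        a, r = A[i], A[i] @ x - b[i]
        s = np.clip(r / (alpha * (a @ a)), -1.0, 1.0)
        x = x - alpha * s * a
    return x

# Illustrative robust regression instance with exact measurements.
rng = np.random.default_rng(4)
n, d = 500, 10
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star
x_hat = stochastic_proximal_point(A, b, x0=np.zeros(d))
print(np.linalg.norm(x_hat - x_star))
```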

Convergence Rates for Deterministic and Stochastic Subgradient Methods Without Lipschitz Continuity

We generalize the classic convergence rate theory for subgradient methods to apply to non-Lipschitz functions via a new measure of steepness. For the deterministic projected subgradient method, we derive a global $O(1/\sqrt{T})$ convergence rate for any function with at most exponential growth. Our approach implies generalizations of the standard convergence rates for gradient descent on …
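A minimal sketch of the deterministic projected subgradient method on a convex but non-Lipschitz example (a quartic plus an absolute value, minimized over a box); the step-size rule and problem data are illustrative, not the paper's exact scheme.

```python
import numpy as np

def projected_subgradient(f, subgrad, project, x0, T=1000, step=0.1):
    """Deterministic projected subgradient method with diminishing steps:
        x_{t+1} = project(x_t - (step / sqrt(t+1)) * g_t),
    where g_t is a subgradient of f at x_t. Returns the best iterate seen,
    the usual bookkeeping for subgradient methods."""
    x, best = x0.copy(), x0.copy()
    for t in range(T):
        g = subgrad(x)
        x = project(x - step / np.sqrt(t + 1) * g)
        if f(x) < f(best):
            best = x.copy()
    return best

# Convex but non-Lipschitz example over a box: f(x) = sum(x_i^4) + |x_1 - 1|.
f = lambda x: np.sum(x**4) + abs(x[0] - 1.0)
def subgrad(x):
    g = 4.0 * x**3
    g[0] += np.sign(x[0] - 1.0)
    return g
project = lambda x: np.clip(x, -2.0, 2.0)

x_best = projected_subgradient(f, subgrad, project, x0=np.full(3, 2.0))
print(x_best, f(x_best))
```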

The nonsmooth landscape of phase retrieval

We consider a popular nonsmooth formulation of the real phase retrieval problem. We show that under standard statistical assumptions, a simple subgradient method converges linearly when initialized within a constant relative distance of an optimal solution. Seeking to understand the distribution of the stationary points of the problem, we complete the paper by proving that …
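A minimal sketch of a simple subgradient method for the nonsmooth loss $f(x) = \frac{1}{m}\sum_i |(a_i^T x)^2 - b_i|$, using the Polyak step size (valid here because the noiseless minimum value is zero) and an initialization at a constant relative distance from a signal; the data and names are illustrative assumptions.

```python
import numpy as np

def phase_retrieval_subgradient(A, b, x0, n_steps=200):
    """Polyak subgradient method on f(x) = (1/m) * sum_i |(a_i^T x)^2 - b_i|.
    For noiseless measurements the minimum value is 0, so the Polyak step
    f(x) / ||g||^2 can be used."""
    x = x0.copy()
    m = len(b)
    for _ in range(n_steps):
        Ax = A @ x
        r = Ax**2 - b
        fval = np.abs(r).mean()
        g = (2.0 / m) * (A.T @ (np.sign(r) * Ax))   # a subgradient of f at x
        gg = g @ g
        if fval == 0.0 or gg == 0.0:
            break
        x = x - fval / gg * g
    return x

# Illustrative noiseless instance, initialized near one of the two signals.
rng = np.random.default_rng(5)
d, m = 20, 160
x_star = rng.normal(size=d)
A = rng.normal(size=(m, d))
b = (A @ x_star)**2
x0 = x_star + 0.2 * rng.normal(size=d)              # constant relative distance
x_hat = phase_retrieval_subgradient(A, b, x0)
print(min(np.linalg.norm(x_hat - x_star), np.linalg.norm(x_hat + x_star)))
```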

Radial Subgradient Descent

We present a subgradient method for solving non-smooth, non-Lipschitz convex optimization problems. The only structure assumed is that a strictly feasible point is known. We extend the work of Renegar [1] by taking a different perspective, leading to an algorithm which is conceptually more natural, has notably improved convergence rates, and for which the analysis …

RSG: Beating Subgradient Method without Smoothness and Strong Convexity

In this paper, we study the efficiency of a {\bf R}estarted {\bf S}ub{\bf G}radient (RSG) method that periodically restarts the standard subgradient method (SG). We show that, when applied to a broad class of convex optimization problems, the RSG method can find an $\epsilon$-optimal solution with lower complexity than the SG method. In particular, we first …
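A minimal sketch of the restart pattern: each stage runs the standard subgradient method with a constant step from the previous stage's averaged iterate, then halves the step. The stage lengths and step sizes here are illustrative and differ from the paper's exact choices.

```python
import numpy as np

def subgradient_stage(subgrad, x0, step, t):
    """One stage: run the standard subgradient method with a constant step
    for t iterations and return the averaged iterate."""
    x, avg = x0.copy(), np.zeros_like(x0)
    for _ in range(t):
        x = x - step * subgrad(x)
        avg += x / t
    return avg

def restarted_subgradient(subgrad, x0, step0=1.0, t=200, n_stages=8):
    """Restarted subgradient pattern: restart SG from the previous stage's
    averaged iterate and halve the step size at every restart."""
    x, step = x0.copy(), step0
    for _ in range(n_stages):
        x = subgradient_stage(subgrad, x, step, t)
        step /= 2.0
    return x

# Illustrative sharp (but not strongly convex) objective: f(x) = ||x - 1||_1.
target = np.ones(5)
subgrad = lambda x: np.sign(x - target)
x_hat = restarted_subgradient(subgrad, x0=10.0 * np.ones(5))
print(np.abs(x_hat - target).sum())
```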