Primal-dual subgradient method for huge-scale linear conic problems

In this paper we develop a {\em primal-dual} subgradient method for solving huge-scale linear conic optimization problems. Our main assumption is that the primal cone is formed as a direct product of many small-dimensional convex cones, and that the matrix $A$ of the corresponding linear operator is {\em uniformly sparse}. In this case, our method can …
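
A minimal sketch of the kind of cheap iteration such sparsity enables: a subgradient step for $f(x) = \max_i (a_i^T x - b_i)$ that touches only the nonzeros of the active row. The max-of-linear objective, the stepsize, and all names here are illustrative assumptions, not the paper's method.

```python
import numpy as np
import scipy.sparse as sp

def sparse_subgradient(A, b, x0, steps=1000):
    """Subgradient method for f(x) = max_i (A[i] @ x - b[i]).

    The point of the sketch: when A has uniformly sparse rows, each update
    touches only the nonzeros of the active row, so the per-iteration work
    stays tiny even when x is huge. Objective and stepsize are illustrative
    assumptions, not the paper's setting.
    """
    A = sp.csr_matrix(A)
    x = np.asarray(x0, dtype=float).copy()
    for k in range(1, steps + 1):
        r = A @ x - b                    # recomputed for clarity; maintainable
        i = int(np.argmax(r))            # incrementally in a full method
        row = A.getrow(i)                # sparse subgradient a_i of the max
        h = 1.0 / np.sqrt(k)             # classical diminishing stepsize
        x[row.indices] -= h * row.data   # update only where a_i is nonzero
    return x
```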

Fast global convergence of gradient methods for high-dimensional statistical recovery

Many statistical $M$-estimators are based on convex optimization problems formed by the combination of a data-dependent loss function with a norm-based regularizer. We analyze the convergence rates of projected gradient and composite gradient methods for solving such problems, working within a high-dimensional framework that allows the data dimension $d$ to grow with (and possibly exceed) …
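
As an illustration of the projected gradient methods analyzed, here is a sketch of one such $M$-estimator: least squares constrained to an $\ell_1$-ball (constrained Lasso), using the standard Euclidean projection of Duchi et al. (2008). The stepsize and iteration count are placeholder assumptions, not the paper's analysis.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection onto {x : ||x||_1 <= radius} (Duchi et al., 2008)."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    cssv = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > cssv - radius)[0][-1]
    theta = (cssv[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def projected_gradient(X, y, radius, steps=500):
    """Projected gradient for least squares over an l1-ball, one instance
    of the norm-regularized M-estimators the paper analyzes. The 1/L
    stepsize and fixed iteration count are placeholder choices."""
    beta = np.zeros(X.shape[1])
    lr = 1.0 / np.linalg.norm(X, 2) ** 2          # 1/L with L = ||X||_2^2
    for _ in range(steps):
        grad = X.T @ (X @ beta - y)               # gradient of 0.5*||Xb - y||^2
        beta = project_l1_ball(beta - lr * grad, radius)
    return beta
```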

A Fair, Sequential Multiple Objective Optimization Algorithm

In multi-objective optimization the goal is to reach a point which is Pareto efficient. However, we usually encounter many such points, and choosing a point amongst them poses another problem. In many applications we are required to choose a point having a good spread over all objective functions, which is a direct consequence of the …
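
One common way to formalize "good spread" is max-min (leximin) fairness; the toy sketch below picks, from a finite set of candidate objective vectors, the one whose sorted objective values are lexicographically largest. This is a hypothetical illustration of the fairness idea, not the paper's algorithm.

```python
import numpy as np

def maxmin_fair_choice(points):
    """From candidate objective vectors (rows, larger = better), pick the
    one whose sorted objective values are lexicographically largest
    (leximin). A toy formalization of 'good spread'."""
    P = np.asarray(points, dtype=float)
    keys = np.sort(P, axis=1)           # worst objective value first
    order = np.lexsort(keys.T[::-1])    # primary key: the worst objective
    return P[order[-1]]                 # best worst case, ties broken fairly

# maxmin_fair_choice([[1, 9], [4, 5], [3, 8]]) -> array([4., 5.])
```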

Computational aspects of risk-averse optimisation in two-stage stochastic models

In this paper we argue for aggregated models in decomposition schemes for two-stage stochastic programming problems. We observe that analogous schemes proved effective for single-stage risk-averse problems, and for general linear programming problems. A major drawback of the aggregated approach for two-stage problems is that an aggregated master problem cannot contain all the information …
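
The aggregation in question can be seen in miniature: per-scenario cuts $Q_s(x) \ge c_s + g_s^\top x$ averaged into a single cut on the expected recourse, which is precisely where per-scenario information gets lost. A sketch under that reading; the paper's remedy is not shown.

```python
import numpy as np

def aggregate_cuts(cuts, probs):
    """Average per-scenario cuts Q_s(x) >= c_s + g_s @ x into one cut on
    the expected recourse E[Q](x). Valid because the expectation of affine
    minorants minorizes E[Q]; the information loss this entails is the
    drawback the abstract mentions. Sketch only."""
    g = sum(p * np.asarray(gs) for p, (gs, cs) in zip(probs, cuts))
    c = sum(p * cs for p, (gs, cs) in zip(probs, cuts))
    return g, c    # single aggregated cut: E[Q](x) >= c + g @ x
```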

Convex computation of the region of attraction of polynomial control systems

We address the long-standing problem of computing the region of attraction (ROA) of a target set (typically a neighborhood of an equilibrium point) of a controlled nonlinear system with polynomial dynamics and semialgebraic state and input constraints. We show that the ROA can be computed by solving a convex linear programming (LP) problem over the …
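
Schematically, the occupation-measure formulation behind such results is an infinite-dimensional LP of roughly the following form (a paraphrase assumed from the abstract's setting; the precise spaces, boundary conditions, and control handling follow the paper):

\[
\begin{aligned}
\sup_{\mu_0,\,\mu,\,\mu_T \,\ge\, 0}\quad & \mu_0(X)\\
\text{s.t.}\quad & \int\!\Big(\tfrac{\partial v}{\partial t} + \nabla_x v \cdot f\Big)\, d\mu
\;=\; \int v(T,\cdot)\, d\mu_T - \int v(0,\cdot)\, d\mu_0
\qquad \forall\, v \in C^1,\\
& \mu_0 \;\le\; \lambda \quad (\lambda = \text{Lebesgue measure on } X).
\end{aligned}
\]

The first constraint is the Liouville (transport) equation in weak form; the domination constraint pushes the optimal $\mu_0$ toward the restriction of $\lambda$ to the ROA, so the optimal value approximates the ROA's volume.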

An adaptive accelerated first-order method for convex optimization

This paper presents a new accelerated variant of Nesterov’s method for solving composite convex optimization problems, in which certain acceleration parameters are adaptively (and aggressively) chosen so as to substantially improve its practical performance relative to existing accelerated variants while preserving the optimal iteration-complexity shared by these methods. Computational results are …
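
For orientation, here is a generic FISTA-style accelerated proximal gradient with backtracking, one standard way acceleration parameters get chosen adaptively; the paper's specific (and more aggressive) rule is not reproduced here.

```python
import numpy as np

def fista_backtracking(f, grad_f, prox_g, x0, L0=1.0, eta=2.0, steps=200):
    """Accelerated proximal gradient (FISTA-style) with backtracking on
    the smoothness constant L. Generic sketch, not the paper's rule.
    prox_g(v, t) must return argmin_u g(u) + ||u - v||^2 / (2 t)."""
    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    for _ in range(steps):
        g = grad_f(y)
        while True:                               # grow L until the quadratic
            x_new = prox_g(y - g / L, 1.0 / L)    # model majorizes f at x_new
            d = x_new - y
            if f(x_new) <= f(y) + g @ d + 0.5 * L * (d @ d):
                break
            L *= eta
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x
```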

A first-order block-decomposition method for solving two-easy-block structured semidefinite programs

In this paper, we consider a first-order block-decomposition method for minimizing the sum of a convex differentiable function with Lipschitz continuous gradient, and two other proper closed convex (possibly, nonsmooth) functions with easily computable resolvents. The method presented contains two important ingredients from a computational point of view, namely: an adaptive choice of stepsize for …
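
Two textbook examples of the "easily computable resolvents" such methods assume, sketched here for concreteness: soft-thresholding for the $\ell_1$ norm, and eigenvalue clipping for the PSD-cone indicator, the latter being the workhorse in semidefinite programming settings.

```python
import numpy as np

def prox_l1(v, lam):
    """Resolvent of lam * ||.||_1: soft-thresholding, closed form, O(n)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def proj_psd(V):
    """Resolvent (projection) of the PSD-cone indicator: clip negative
    eigenvalues of a symmetric matrix. One eigendecomposition per call."""
    w, Q = np.linalg.eigh(V)
    return (Q * np.maximum(w, 0.0)) @ Q.T
```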

An acceleration procedure for optimal first-order methods

We introduce in this paper an optimal first-order method that allows an easy and cheap evaluation of the local Lipschitz constant of the objective’s gradient. This constant must ideally be chosen at every iteration as small as possible, while still yielding a valid upper bound on the value of the objective function. In the previously …
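
That requirement, the smallest constant that still gives an upper bound, is exactly what a backtracking test enforces. A generic sketch (not the paper's procedure):

```python
import numpy as np

def local_lipschitz_step(f, grad_f, x, L=1.0, eta=2.0):
    """One gradient step using the smallest tried L whose quadratic model
    still upper-bounds the objective: the 'as small as possible, yet still
    an upper bound' trade-off. Generic backtracking sketch."""
    g = grad_f(x)
    while True:
        x_new = x - g / L
        if f(x_new) <= f(x) - (g @ g) / (2.0 * L):   # descent-lemma test
            return x_new, L                          # L certified locally
        L *= eta                                     # model too optimistic
```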

Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms

In this paper we study new stochastic approximation (SA) type algorithms, namely, the accelerated SA (AC-SA), for solving strongly convex stochastic composite optimization (SCO) problems. Specifically, by introducing a domain shrinking procedure, we significantly improve the large-deviation results associated with the convergence rate of a nearly optimal AC-SA algorithm presented by the authors. Moreover, we …
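
A caricature of domain shrinking for strongly convex stochastic problems: run SA on a ball around the current averaged iterate, then halve the radius and restart. Every parameter choice below is an illustrative assumption, not the AC-SA algorithm itself.

```python
import numpy as np

def shrinking_sgd(sgrad, x0, R0, mu, epochs=10, iters=100):
    """Stochastic approximation with restarts on balls of halving radius
    around the running averaged iterate: a sketch of the domain-shrinking
    idea for mu-strongly convex objectives. sgrad(x) returns a stochastic
    gradient at x; all parameter choices are illustrative."""
    x_bar, R = np.asarray(x0, dtype=float).copy(), R0
    for _ in range(epochs):
        center, x = x_bar.copy(), x_bar.copy()
        avg = np.zeros_like(x)
        for k in range(1, iters + 1):
            x = x - sgrad(x) / (mu * k)        # classical 1/(mu k) stepsize
            d = x - center
            nrm = np.linalg.norm(d)
            if nrm > R:                        # project onto B(center, R)
                x = center + (R / nrm) * d
            avg += (x - avg) / k               # running average of iterates
        x_bar, R = avg, R / 2.0                # shrink the search domain
    return x_bar
```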