A Family of Subgradient-Based Methods for Convex Optimization Problems in a Unifying Framework

We propose a new family of subgradient- and gradient-based methods that converge with optimal complexity for convex optimization problems whose feasible region is simple enough. This includes cases where the objective function is non-smooth, smooth, has composite/saddle structure, or is given by an inexact oracle model. We unify the way of constructing the subproblems which … Read more
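
For orientation, here is a minimal sketch of the classical projected subgradient method over a simple feasible set (the Euclidean ball), the kind of building block such a framework covers in the non-smooth case; the step-size rule, the iterate averaging, and all names are illustrative assumptions, not the unified subproblem construction proposed in the paper.

import numpy as np

def project_ball(w, radius=1.0):
    # Euclidean projection onto {w : ||w||_2 <= radius}, a "simple" feasible set.
    norm = np.linalg.norm(w)
    return w if norm <= radius else (radius / norm) * w

def projected_subgradient(subgrad, w0, steps=1000, radius=1.0):
    # Classical projected subgradient method with a 1/sqrt(t) step size;
    # the O(1/sqrt(t)) rate is the optimal one for general non-smooth convex f.
    w = project_ball(np.asarray(w0, dtype=float), radius)
    avg = np.zeros_like(w)
    for t in range(1, steps + 1):
        g = subgrad(w)                           # any subgradient of f at w
        w = project_ball(w - g / np.sqrt(t), radius)
        avg += (w - avg) / t                     # averaged iterate, returned below
    return avg

For instance, subgrad = lambda w: np.sign(w - target) supplies a valid subgradient of the non-smooth objective ||w - target||_1.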

Dual Averaging Methods for Regularized Stochastic Learning and Online Optimization

We consider regularized stochastic learning and online optimization problems, where the objective function is the sum of two convex terms: one is the loss function of the learning task, and the other is a simple regularization term such as the $\ell_1$-norm for promoting sparsity. We develop extensions of Nesterov’s dual averaging method that can exploit the … Read more
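
As an illustration of the kind of update involved, here is a minimal sketch of an $\ell_1$-regularized dual averaging step, assuming the standard soft-thresholding closed form of the per-step subproblem with a gamma*sqrt(t) proximal coefficient; the function names, step-size rule, and default parameters are assumptions for illustration, not the exact variants developed in the paper.

import numpy as np

def rda_l1_step(g_bar, t, lam=0.01, gamma=1.0):
    # Closed-form minimizer of <g_bar, w> + lam*||w||_1 + (gamma/sqrt(t))*||w||_2^2/2:
    # coordinates with |g_bar_i| <= lam are truncated to exactly zero,
    # the rest are soft-thresholded and scaled by sqrt(t)/gamma.
    shrunk = np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam, 0.0)
    return -(np.sqrt(t) / gamma) * shrunk

def rda_l1(stream, subgrad, dim, lam=0.01, gamma=1.0):
    # Online loop: average all past loss subgradients and re-solve the
    # regularized subproblem at every step.
    w = np.zeros(dim)
    g_bar = np.zeros(dim)
    for t, (x, y) in enumerate(stream, start=1):
        g_bar += (subgrad(w, x, y) - g_bar) / t   # running subgradient average
        w = rda_l1_step(g_bar, t, lam, gamma)
    return w

Because the whole average of past subgradients, rather than only the latest one, is compared against the threshold lam, coordinates can be set exactly to zero even late in the run, which is what promotes sparse solutions.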