RandProx: Primal-Dual Optimization Algorithms with Randomized Proximal Updates

Proximal splitting algorithms are well suited to solving large-scale nonsmooth optimization problems, in particular those arising in machine learning. We propose a new primal-dual algorithm in which the dual update is randomized; equivalently, the proximity operator of one of the functions in the problem is replaced by a stochastic oracle. For instance, some randomly chosen …
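
As a rough illustration of the randomized dual update, here is a minimal sketch (my own simplification, not the authors' exact scheme, and without the paper's convergence-preserving corrections) for min_x lam*||x||_1 + 0.5*||Ax - b||^2, where only a random subset of dual coordinates is refreshed per iteration:

    import numpy as np

    def prox_l1(v, t):
        # proximity operator of t*||.||_1 (soft-thresholding)
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def randomized_primal_dual(A, b, lam, tau, sigma, p=0.5, iters=1000, seed=0):
        # Chambolle-Pock-style loop for min_x lam*||x||_1 + 0.5*||Ax - b||^2,
        # except that the dual proximal update touches only a random subset
        # S of coordinates each iteration (probability p per coordinate).
        rng = np.random.default_rng(seed)
        m, n = A.shape
        x, y = np.zeros(n), np.zeros(m)
        x_bar = x.copy()
        for _ in range(iters):
            S = rng.random(m) < p                # random subset of dual coords
            y_new = y.copy()
            # prox of sigma*g* for g(z) = 0.5*||z - b||^2, restricted to S
            y_new[S] = (y[S] + sigma * (A[S] @ x_bar - b[S])) / (1.0 + sigma)
            x_new = prox_l1(x - tau * (A.T @ y_new), tau * lam)
            x_bar = 2.0 * x_new - x              # extrapolation
            x, y = x_new, y_new
        return x

The step sizes tau and sigma are assumed to satisfy the usual primal-dual condition tau*sigma*||A||^2 < 1.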

Accelerated Bregman Proximal Gradient Methods for Relatively Smooth Convex Optimization

We consider the problem of minimizing the sum of two convex functions: one is differentiable and relatively smooth with respect to a reference convex function, and the other can be nondifferentiable but simple to optimize. The relative smoothness condition is much weaker than the standard assumption of uniform Lipschitz continuity of the gradients, thus significantly …
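
For context, a Bregman proximal gradient step replaces the Euclidean proximal step with one built from the Bregman divergence D_h of the reference function h. A minimal sketch of one such step (my illustration, not the accelerated method of the paper), using the negative entropy as h on the probability simplex, where the update has a closed form:

    import numpy as np

    def bregman_step_entropy(x, grad, step):
        # One (non-accelerated) Bregman proximal gradient step with reference
        # function h(x) = sum_i x_i*log(x_i) over the simplex:
        #   x+ = argmin_u { <grad, u> + (1/step) * D_h(u, x) : u in simplex },
        # which reduces to the multiplicative (exponentiated-gradient) update:
        z = x * np.exp(-step * grad)
        return z / z.sum()

Relative smoothness of the differentiable term with respect to h, rather than a uniform Lipschitz gradient, is what makes a constant step size admissible in such updates.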

Smooth minimization of nonsmooth functions with parallel coordinate descent methods

We study the performance of a family of randomized parallel coordinate descent methods for minimizing the sum of a nonsmooth convex function and a separable convex function. The problem class includes as special cases L1-regularized L1 regression and the minimization of the exponential loss (the “AdaBoost problem”). We assume the input data defining the loss function is contained …
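
To illustrate the smoothing idea on one of the special cases, here is a hedged sketch (my own simplification, serial rather than parallel) that smooths L1 regression with a Huber-type function and then runs coordinate updates on the smooth surrogate:

    import numpy as np

    def huber_grad(t, mu):
        # gradient of the smoothed absolute value (smoothing parameter mu)
        return np.clip(t / mu, -1.0, 1.0)

    def smoothed_l1_regression_cd(A, b, mu, iters=10000, seed=0):
        # Coordinate descent on f_mu(x) = sum_i huber(a_i.x - b_i; mu),
        # a smooth surrogate of the nonsmooth L1 loss ||Ax - b||_1.
        # The paper's parallel variants update many coordinates at once,
        # with the step size inflated by a safety factor.
        rng = np.random.default_rng(seed)
        m, n = A.shape
        x = np.zeros(n)
        L = (A * A).sum(axis=0) / mu      # coordinate Lipschitz constants of f_mu
        for _ in range(iters):
            j = rng.integers(n)
            g = A[:, j] @ huber_grad(A @ x - b, mu)   # partial derivative
            x[j] -= g / L[j]
        return x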

Separable Approximations and Decomposition Methods for the Augmented Lagrangian

In this paper we study decomposition methods based on separable approximations for minimizing the augmented Lagrangian. In particular, we study and compare the Diagonal Quadratic Approximation Method (DQAM) of Mulvey and Ruszczyński and the Parallel Coordinate Descent Method (PCDM) of Richtárik and Takáč. We show that the two methods are equivalent for feasibility problems up …
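
As a rough sketch of the common structure (my illustration, under stated assumptions), both methods minimize the augmented Lagrangian by replacing the coupling quadratic with a separable upper bound, so that all coordinates can be updated independently:

    import numpy as np

    def separable_al_step(A, b, x, y, rho, omega):
        # One inner step on the augmented Lagrangian
        #   L(x, y) = y.(Ax - b) + (rho/2)*||Ax - b||^2   (separable terms omitted),
        # using the diagonal majorization ||A d||^2 <= omega * sum_j ||A_j||^2 * d_j^2,
        # where omega is the maximum number of nonzeros in a row of A (assumption).
        grad = A.T @ (y + rho * (A @ x - b))
        D = omega * rho * (A * A).sum(axis=0)   # separable curvature bound
        return x - grad / D                     # coordinates decouple: parallelizable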

Inexact Coordinate Descent: Complexity and Preconditioning

In this paper we consider the problem of minimizing a convex function using a randomized block coordinate descent method. One of the key steps at each iteration of the algorithm is determining the update to a block of variables. Existing algorithms assume that in order to compute the update, a particular subproblem is solved exactly. …
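
To make the inexactness concrete, here is a small sketch (my illustration, not the paper's notation): the block subproblem is minimized only approximately, by a few gradient iterations instead of an exact solve:

    import numpy as np

    def inexact_block_update(g, B, inner_iters=5):
        # Approximately minimize the block model  t -> g.t + 0.5 * t.B.t
        # (g is the block gradient, B the block's quadratic model) with a few
        # gradient iterations rather than solving B t = -g exactly; conjugate
        # gradients or a preconditioned solver would be the natural upgrade.
        t = np.zeros_like(g)
        step = 1.0 / np.linalg.norm(B, 2)    # 1/L for the block
        for _ in range(inner_iters):
            t -= step * (g + B @ t)
        return t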

Parallel Coordinate Descent Methods for Big Data Optimization

In this work we show that randomized (block) coordinate descent methods can be accelerated by parallelization when applied to the problem of minimizing the sum of a partially separable smooth convex function and a simple separable convex function. The theoretical speedup over the serial method, measured in terms of the number of iterations needed …
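
A minimal sketch of the parallel update rule (my simplification; in the paper the safe step-size factor beta is derived from the degree of partial separability, and partial_grad stands in for the partial-derivative oracle):

    import numpy as np

    def pcdm_iteration(partial_grad, L, x, tau, beta, rng):
        # Sample tau coordinates uniformly at random and update each with the
        # step 1/(beta*L_j); beta >= 1 compensates for the interaction between
        # coordinates updated in the same parallel round.
        S = rng.choice(x.shape[0], size=tau, replace=False)
        g = np.array([partial_grad(x, j) for j in S])  # all at the current point
        x[S] -= g / (beta * L[S])                      # updates run in parallel
        return x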

Efficient Serial and Parallel Coordinate Descent Methods for Huge-Scale Truss Topology Design

In this work we propose solving huge-scale instances of the truss topology design problem with coordinate descent methods. We develop four efficient codes: serial and parallel implementations of randomized and greedy rules for the selection of the variable (potential bar) to be updated in the next iteration. Both serial methods enjoy an O(n/k) iteration complexity …
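
The two selection rules differ in a single line; a minimal sketch (my illustration) of the choice between them:

    import numpy as np

    def select_variable(grad, rng, rule="greedy"):
        # Greedy rule: pick the variable (potential bar) whose partial
        # derivative promises the largest decrease.
        # Randomized rule: pick a variable uniformly at random.
        if rule == "greedy":
            return int(np.argmax(np.abs(grad)))
        return int(rng.integers(grad.shape[0]))

The greedy rule pays more per iteration for the full gradient scan but can need far fewer iterations; the randomized rule keeps the per-iteration cost minimal.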

Iteration Complexity of Randomized Block-Coordinate Descent Methods for Minimizing a Composite Function

In this paper we develop a randomized block-coordinate descent method for minimizing the sum of a smooth and a simple nonsmooth block-separable convex function and prove that it obtains an $\epsilon$-accurate solution with probability at least $1-\rho$ in at most $O(\tfrac{n}{\epsilon} \log \tfrac{1}{\rho})$ iterations, where $n$ is the number of blocks. For strongly convex functions …
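
For reference, here is a minimal instance of such a method (blocks of size one, my illustration) applied to the composite problem F(x) = 0.5*||Ax - b||^2 + lam*||x||_1:

    import numpy as np

    def soft(v, t):
        # proximity operator of t*|.| (soft-thresholding)
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def rcd_lasso(A, b, lam, iters=10000, seed=0):
        # Randomized coordinate descent: pick a coordinate uniformly at
        # random and take a proximal step with step size 1/L_j.
        rng = np.random.default_rng(seed)
        m, n = A.shape
        x = np.zeros(n)
        r = -b                        # residual A x - b, maintained incrementally
        L = (A * A).sum(axis=0)       # coordinate Lipschitz constants ||A_j||^2
        for _ in range(iters):
            j = rng.integers(n)
            g = A[:, j] @ r           # partial derivative of the smooth part
            xj = soft(x[j] - g / L[j], lam / L[j])
            r += A[:, j] * (xj - x[j])
            x[j] = xj
        return x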

Improved algorithms for convex minimization in relative scale

In this paper we propose two modifications to Nesterov’s algorithms for minimizing convex functions in relative scale. The first is based on a bisection technique and leads to improved theoretical iteration complexity, and the second is a heuristic for avoiding restarting behavior. The fastest of our algorithms produces a solution within relative error O(1/k) of …
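
The bisection idea can be sketched abstractly as follows (my illustration, not the paper's exact scheme; check_level is a hypothetical oracle reporting whether the inner first-order method can certify that the optimal value lies below a threshold):

    def bisection_on_optimal_value(check_level, lower, upper, tol):
        # Maintain bounds 0 < lower <= f* <= upper and halve the gap each
        # round; the relative-error target makes the stopping rule multiplicative.
        while upper - lower > tol * lower:
            mid = 0.5 * (lower + upper)
            if check_level(mid):     # f* <= mid certified by the inner method
                upper = mid
            else:
                lower = mid
        return lower, upper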

Approximate Level Method

In this paper we propose and analyze a variant of the level method [4], which is an algorithm for minimizing nonsmooth convex functions. The main work per iteration is spent on 1) minimizing a piecewise-linear model of the objective function and on 2) projecting onto the intersection of the feasible region and a polyhedron arising …
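
One iteration of the (exact) level method can be sketched as follows (my illustration; model_min and project_level are hypothetical helpers standing in for the two subproblems named above, which the approximate variant solves only inexactly):

    def level_method_iteration(model_min, project_level, x, f_best, lam=0.5):
        # 1) minimize the piecewise-linear cutting-plane model of the
        #    objective (a linear program), giving a lower bound f_low;
        f_low = model_min()
        # 2) set the level between the bounds and project the current iterate
        #    onto the feasible region intersected with the level set
        #    {x : model(x) <= level} (a quadratic program).
        level = f_low + lam * (f_best - f_low)
        return project_level(x, level)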