A Stochastic Majorize-Minimize Subspace Algorithm for Online Penalized Least Squares Estimation

Stochastic approximation techniques play an important role in solving many problems encountered in machine learning or adaptive signal processing. In these contexts, the statistics of the data are often unknown a priori, or their direct computation is too intensive, and they thus have to be estimated online from the observed signals. For batch optimization of …
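
A minimal sketch of the online setting described above, using plain stochastic gradient steps on an $\ell_2$-penalized least-squares objective. This is a generic illustration rather than the authors' majorize-minimize subspace scheme; the synthetic stream, the penalty weight lam, and the decaying step size are all choices of the example.

    import numpy as np

    def online_penalized_ls(stream, dim, lam=0.1, step=0.01):
        """Plain SGD for min_x E[(a'x - b)^2] + lam*||x||^2.

        `stream` yields (a, b) pairs one at a time; the statistics of
        the data are never formed explicitly, mirroring the online
        setting in which they are unknown a priori.
        """
        x = np.zeros(dim)
        for t, (a, b) in enumerate(stream, start=1):
            grad = 2.0 * (a @ x - b) * a + 2.0 * lam * x
            x -= (step / np.sqrt(t)) * grad  # decaying step size
        return x

    # usage: a synthetic stream of noisy linear measurements
    rng = np.random.default_rng(0)
    x_true = rng.standard_normal(5)
    stream = ((a, a @ x_true + 0.01 * rng.standard_normal())
              for a in rng.standard_normal((1000, 5)))
    print(online_penalized_ls(stream, dim=5))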

Improved pointwise iteration-complexity of a regularized ADMM and of a regularized non-Euclidean HPE framework

This paper describes a regularized variant of the alternating direction method of multipliers (ADMM) for solving linearly constrained convex programs. It is shown that the pointwise iteration-complexity of the new method is better than that of the standard ADMM and that, up to a logarithmic term, it is identical to the ergodic iteration-complexity …
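
For reference, the standard (unregularized) ADMM iteration that the regularized variant modifies, instantiated on the lasso problem $\min_x \frac{1}{2}\|Ax-b\|^2 + \lambda\|x\|_1$ via the splitting $x = z$; the penalty parameter rho and the problem data are assumptions of this sketch.

    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
        """Standard ADMM for min 0.5*||Ax-b||^2 + lam*||z||_1, s.t. x = z."""
        n = A.shape[1]
        x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
        # factor the x-subproblem matrix once and reuse it
        M = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
        Atb = A.T @ b
        for _ in range(iters):
            rhs = Atb + rho * (z - u)
            x = np.linalg.solve(M.T, np.linalg.solve(M, rhs))
            z = soft_threshold(x + u, lam / rho)   # prox of the l1 term
            u = u + x - z                          # dual (multiplier) update
        return z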

Regularized Interior Proximal Alternating Direction Method for Separable Convex Optimization Problems

In this article we present a version of the proximal alternating direction method for a convex problem with linear constraints and a separable objective function, in which the standard quadratic regularizing term is replaced with an interior proximal metric for those variables that are required to satisfy some additional convex constraints. Moreover, the proposed method …
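
Concretely, for variables confined to the nonnegative orthant, the replacement described above could take the following form, with the quadratic term giving way to an interior proximal distance (the entropy kernel below is a standard illustrative choice, not necessarily the metric used in the paper):
\[
\frac{1}{2}\|x - x^k\|^2 \;\longrightarrow\; d(x, x^k) = \sum_{i=1}^n \Big( x_i \log\frac{x_i}{x_i^k} - x_i + x_i^k \Big),
\]
whose gradient blows up at the boundary of the orthant, so the subproblem solutions stay strictly feasible without an explicit constraint.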

Distributed Stochastic Variance Reduced Gradient Methods and a Lower Bound for Communication Complexity

We study distributed optimization algorithms for minimizing the average of convex functions. The applications include empirical risk minimization problems in statistical machine learning where the datasets are large and have to be stored on different machines. We design a distributed stochastic variance reduced gradient algorithm that, under certain conditions on the condition number, simultaneously achieves …
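
As a hedged sketch, the single-machine SVRG scheme that such distributed methods build on (the paper's contribution is the distributed variant and the matching communication lower bound; grad_i, the step size, and the epoch length m are assumptions here):

    import numpy as np

    def svrg(grad_i, n, x0, step=0.01, epochs=20, m=None):
        """Classical SVRG for min_x (1/n) * sum_i f_i(x).

        grad_i(i, x) returns the gradient of the i-th component at x.
        Each epoch recomputes a full gradient at a snapshot and then
        takes m cheap variance-reduced stochastic steps.
        """
        rng = np.random.default_rng(0)
        x = x0.copy()
        m = m or 2 * n
        for _ in range(epochs):
            snapshot = x.copy()
            full_grad = sum(grad_i(i, snapshot) for i in range(n)) / n
            for _ in range(m):
                i = rng.integers(n)
                x = x - step * (grad_i(i, x) - grad_i(i, snapshot) + full_grad)
            # (in the distributed setting, the full-gradient pass is
            # what drives the communication cost)
        return x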

Accelerated First-Order Methods for Hyperbolic Programming

A framework is developed for applying accelerated methods to general hyperbolic programming, including linear, second-order cone, and semidefinite programming as special cases. The approach replaces a hyperbolic program with a convex optimization problem whose smooth objective function is explicit, and for which the only constraints are linear equations (one more linear equation than for the …
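
A generic sketch of the kind of solver this reformulation enables: an accelerated (Nesterov-style) gradient method whose only constraint handling is projection onto the affine set $\{x : Ax = b\}$. This is illustrative only; the paper's framework, its explicit smooth objective, and its step-size rules are not reproduced here.

    import numpy as np

    def accel_grad_affine(grad, A, b, x0, step, iters=500):
        """Accelerated gradient descent over {x : Ax = b}.

        Because the only constraints are linear equations, each
        projection is a single least-squares correction.
        """
        pinv = np.linalg.pinv(A)
        proj = lambda x: x - pinv @ (A @ x - b)
        x = proj(x0)
        y, t = x, 1.0
        for _ in range(iters):
            x_new = proj(y - step * grad(y))
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
            x, t = x_new, t_new
        return x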

Polytope conditioning and linear convergence of the Frank-Wolfe algorithm

It is known that the gradient descent algorithm converges linearly when applied to a strongly convex function with Lipschitz gradient. In this case the algorithm’s rate of convergence is determined by the condition number of the function. In a similar vein, it has been shown that a variant of the Frank-Wolfe algorithm with away steps converges …
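
For orientation, vanilla Frank-Wolfe over the probability simplex; the paper analyzes the away-step variant, which is what attains linear convergence, while the standard open-loop step size below gives only the usual O(1/k) rate.

    import numpy as np

    def frank_wolfe_simplex(grad, x0, iters=500):
        """Vanilla Frank-Wolfe over the probability simplex.

        The linear minimization oracle over the simplex just picks
        the vertex e_i with the most negative gradient coordinate.
        """
        x = x0.copy()
        for k in range(iters):
            g = grad(x)
            s = np.zeros_like(x)
            s[np.argmin(g)] = 1.0       # LMO: best vertex of the simplex
            gamma = 2.0 / (k + 2.0)     # standard open-loop step size
            x = (1.0 - gamma) * x + gamma * s
        return x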

Solving Conic Systems via Projection and Rescaling

We propose a simple {\em projection and rescaling algorithm} to solve the feasibility problem \[ \text{ find } x \in L \cap \Omega, \] where $L$ and $\Omega$ are respectively a linear subspace and the interior of a symmetric cone in a finite-dimensional vector space $V$. This projection and rescaling algorithm is inspired by previous work …
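
To make the feasibility problem concrete, here is classical alternating projections for the simplest instance, with $\Omega$ the interior of the nonnegative orthant and $L = \text{range}(B)$. This is a baseline illustration of the problem only, not the authors' projection and rescaling algorithm.

    import numpy as np

    def alternating_projections(B, x0, iters=1000):
        """Classical alternating projections for: find x in L ∩ R^n_+,
        with L = range(B).  Not the projection and rescaling method;
        it only illustrates the feasibility problem being solved.
        """
        P = B @ np.linalg.pinv(B)      # orthogonal projector onto L
        x = x0.copy()
        for _ in range(iters):
            x = P @ x                  # project onto the subspace L
            x = np.maximum(x, 0.0)     # project onto the cone R^n_+
        return x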

Perturbation of error bounds

Our aim in the current article is to extend the developments in Kruger, Ngai & Th\'era, SIAM J. Optim. 20(6), 3280–3296 (2010) and, more precisely, to characterize, in the Banach space setting, the stability under data perturbations of the local and global error bound property of inequalities determined by proper lower semicontinuous functions. We propose new …
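
For reference, the local error bound property whose stability is at stake: given a proper lower semicontinuous $f$ with solution set $S = \{x : f(x) \le 0\}$ and a point $\bar{x} \in S$, the inequality $f(x) \le 0$ has a local error bound at $\bar{x}$ when
\[
\exists\, c > 0,\ \varepsilon > 0: \quad d(x, S) \le c\,[f(x)]_+ \quad \text{for all } x \in B(\bar{x}, \varepsilon),
\]
where $[f(x)]_+ = \max\{f(x), 0\}$; the global property requires the same inequality on the whole space.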

SMART: The Stochastic Monotone Aggregated Root-Finding Algorithm

We introduce the Stochastic Monotone Aggregated Root-Finding (SMART) algorithm, a new randomized operator-splitting scheme for finding roots of finite sums of operators. The algorithm is similar to the growing class of incremental aggregated gradient algorithms, which minimize finite sums of functions; the difference is that we replace gradients of functions with black boxes called operators, which …
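
A SAGA-style caricature of an incremental aggregated scheme with operators in place of gradients; SMART's actual sampling, delays, and step-size rules are more general than this sketch, and S, lam, and the iteration budget are assumptions of the example.

    import numpy as np

    def aggregated_root_finding(S, n, x0, lam=0.1, iters=5000):
        """Randomized aggregated scheme for a root of (1/n)*sum_i S_i(x).

        S(i, x) evaluates the i-th operator (a black box).  Only one
        operator is re-evaluated per step; the rest are replayed from
        a table, as in incremental aggregated gradient methods.
        """
        rng = np.random.default_rng(0)
        x = x0.copy()
        table = np.array([S(i, x) for i in range(n)])  # stored evaluations
        avg = table.mean(axis=0)
        for _ in range(iters):
            i = rng.integers(n)
            fresh = S(i, x)
            x = x - lam * (fresh - table[i] + avg)     # variance-reduced step
            avg = avg + (fresh - table[i]) / n
            table[i] = fresh
        return x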

New Douglas-Rachford algorithmic structures and their convergence analyses

In this paper we study new algorithmic structures with Douglas-Rachford (DR) operators to solve convex feasibility problems. We propose to embed the basic two-set-DR algorithmic operator into the String-Averaging Projections (SAP) and into the Block-Iterative Projection (BIP) algorithmic structures, thereby creating new DR algorithmic schemes that include the recently proposed cyclic Douglas-Rachford …
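
The basic two-set DR operator that gets embedded into the SAP and BIP structures, as a standalone sketch; the two planar sets and their closed-form projections below are choices of the example.

    import numpy as np

    def douglas_rachford(proj_A, proj_B, x0, iters=500):
        """Two-set Douglas-Rachford iteration x <- ((I + R_B R_A)/2)(x),
        where R_C = 2 P_C - I reflects through the convex set C.
        Projecting the limit onto A yields a point of A ∩ B when
        the intersection is nonempty.
        """
        reflect = lambda proj, v: 2.0 * proj(v) - v
        x = x0.copy()
        for _ in range(iters):
            x = 0.5 * (x + reflect(proj_B, reflect(proj_A, x)))
        return proj_A(x)

    # usage: a horizontal line and the unit disk in the plane
    proj_line = lambda v: np.array([v[0], 0.5])             # onto {y = 0.5}
    proj_disk = lambda v: v / max(1.0, np.linalg.norm(v))   # onto unit disk
    print(douglas_rachford(proj_line, proj_disk, np.array([3.0, -2.0])))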