Approximate Versions of the Alternating Direction Method of Multipliers

We present three new approximate versions of the alternating direction method of multipliers (ADMM), all of which require only knowledge of subgradients of the subproblem objectives, rather than bounds on the distance to the exact subproblem solution. One version, which applies only to certain common special cases, is based on combining the operator-splitting analysis of the … Read more
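
As a rough illustration of the idea (a minimal sketch, not the paper's method: it assumes a lasso-type model $\min_x \frac{1}{2}\|Ax-b\|^2 + \lambda\|x\|_1$, and the gradient norm of the smooth subproblem stands in for the paper's subgradient-based acceptance criteria):

import numpy as np

def admm_inexact(A, b, lam, rho=1.0, outer=50, inner=100, tol=1e-6):
    # two-block ADMM for 0.5*||Ax-b||^2 + lam*||z||_1  s.t.  x = z,
    # with the x-subproblem solved inexactly by gradient descent and
    # accepted once the subproblem's (sub)gradient is small
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2 + rho        # Lipschitz constant of the subproblem gradient
    for _ in range(outer):
        for _ in range(inner):
            grad = A.T @ (A @ x - b) + rho * (x - z + u)
            if np.linalg.norm(grad) <= tol:    # subgradient-based stopping rule
                break
            x -= grad / L
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # exact prox step
        u += x - z                             # dual update
    return z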

Improved pointwise iteration-complexity of a regularized ADMM and of a regularized non-Euclidean HPE framework

This paper describes a regularized variant of the alternating direction method of multipliers (ADMM) for solving linearly constrained convex programs. It is shown that the pointwise iteration-complexity of the new method is better than the corresponding one for the standard ADMM and that, up to a logarithmic term, it is identical to the ergodic iteration-complexity … Read more
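
For orientation (standard background that this teaser does not display), ADMM targets linearly constrained problems of the form
\[
\min_{x,\,y}\; f(x) + g(y) \quad \text{s.t.} \quad Ax + By = b,
\]
and alternately minimizes the augmented Lagrangian
\[
\mathcal{L}_{\beta}(x,y,\lambda) \;=\; f(x) + g(y) + \langle \lambda,\, Ax + By - b\rangle + \frac{\beta}{2}\,\|Ax + By - b\|^{2}
\]
in $x$ and then in $y$ before updating the multiplier $\lambda$; the regularized variant adds regularizing terms to these two subproblems.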

A Stochastic Majorize-Minimize Subspace Algorithm for Online Penalized Least Squares Estimation

Stochastic approximation techniques play an important role in solving many problems encountered in machine learning or adaptive signal processing. In these contexts, the statistics of the data are often unknown a priori or their direct computation is too intensive, and they must therefore be estimated online from the observed signals. For batch optimization of … Read more
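
A minimal sketch of the flavor of such a scheme (assumed details: a ridge penalty, a two-direction memory-gradient subspace, and running second-order statistics; the paper's majorant construction and penalties are more general, and all names here are illustrative). Here stream is any iterable of (feature vector, response) pairs:

import numpy as np

def online_mm_subspace(stream, n, lam=0.1):
    # online penalized least squares: each sample (a, y) updates running
    # second-order statistics, then x moves within a two-dimensional
    # memory-gradient subspace by minimizing the quadratic majorant there
    x = np.zeros(n); x_prev = np.zeros(n)
    R = np.zeros((n, n)); r = np.zeros(n)
    for t, (a, y) in enumerate(stream, start=1):
        R += (np.outer(a, a) - R) / t          # running estimate of E[a a^T]
        r += (y * a - r) / t                   # running estimate of E[y a]
        grad = R @ x - r + lam * x             # gradient of the ridge-penalized surrogate
        D = np.stack([-grad, x - x_prev], axis=1)   # subspace: gradient + memory direction
        H = D.T @ (R @ D) + lam * (D.T @ D)    # curvature restricted to the subspace
        u = np.linalg.solve(H + 1e-12 * np.eye(2), -(D.T @ grad))
        x_prev = x.copy()
        x = x + D @ u
    return x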

Regularized Interior Proximal Alternating Direction Method for Separable Convex Optimization Problems

In this article we present a version of the proximal alternating direction method for a convex problem with linear constraints and a separable objective function, in which the standard quadratic regularizing term is replaced with an interior proximal metric for those variables that are required to satisfy some additional convex constraints. Moreover, the proposed method … Read more
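
As one illustration of the kind of substitution described (a common interior proximal choice, not necessarily the one used in the paper), for variables constrained to the positive orthant the quadratic term $\frac{1}{2}\|x-x^k\|^2$ can be replaced by the Bregman distance generated by the negative entropy,
\[
d(x, x^{k}) \;=\; \sum_{i}\Big( x_{i}\log\frac{x_{i}}{x_{i}^{k}} - x_{i} + x_{i}^{k}\Big),
\]
whose gradient blows up at the boundary, so the subproblem solutions automatically remain interior and the additional constraints never need to be enforced explicitly.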

Accelerated First-Order Methods for Hyperbolic Programming

A framework is developed for applying accelerated methods to general hyperbolic programming, including linear, second-order cone, and semidefinite programming as special cases. The approach replaces a hyperbolic program with a convex optimization problem whose smooth objective function is explicit, and for which the only constraints are linear equations (one more linear equation than for the … Read more
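
For context (standard definitions, not taken from this teaser), a hyperbolic program has the form
\[
\min_{x}\; \langle c, x\rangle \quad \text{s.t.} \quad Ax = b,\;\; x \in \Lambda_{+},
\]
where $\Lambda_{+}$ is the hyperbolicity cone of a polynomial $p$ that is hyperbolic in a direction $e$ (i.e., $t \mapsto p(x + t e)$ has only real roots for every $x$); choosing $p = \det$ on symmetric matrices with $e = I$ recovers semidefinite programming.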

Polytope conditioning and linear convergence of the Frank-Wolfe algorithm

It is known that the gradient descent algorithm converges linearly when applied to a strongly convex function with Lipschitz gradient. In this case the algorithm’s rate of convergence is determined by the condition number of the function. In a similar vein, it has been shown that a variant of the Frank-Wolfe algorithm with away steps converges … Read more
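
A minimal sketch of the away-step variant itself (illustrative only, for a quadratic objective over the convex hull of a finite vertex set; the paper's contribution is the conditioning analysis, not this algorithm):

import numpy as np

def away_step_frank_wolfe(V, target, iters=500):
    # minimize 0.5*||x - target||^2 over conv(rows of V), keeping the
    # active-vertex weights explicit so away steps can be taken
    m, n = V.shape
    w = np.zeros(m); w[0] = 1.0                  # start at the first vertex
    for _ in range(iters):
        x = V.T @ w
        grad = x - target
        s = int(np.argmin(V @ grad))             # Frank-Wolfe vertex
        active = np.flatnonzero(w > 1e-12)
        a = int(active[np.argmax(V[active] @ grad)])  # away vertex
        d_fw = V[s] - x
        d_aw = x - V[a]
        if grad @ d_fw <= grad @ d_aw:           # pick the steeper descent direction
            d, gmax, away = d_fw, 1.0, False
        else:
            d, gmax, away = d_aw, (w[a] / (1.0 - w[a]) if w[a] < 1.0 else 0.0), True
        denom = d @ d                            # exact line search (quadratic objective)
        gamma = 0.0 if denom == 0.0 else min(gmax, max(0.0, -(grad @ d) / denom))
        if away:
            w *= 1.0 + gamma; w[a] -= gamma
        else:
            w *= 1.0 - gamma; w[s] += gamma
    return V.T @ w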

Solving Conic Systems via Projection and Rescaling

We propose a simple {\em projection and rescaling algorithm} to solve the feasibility problem \[ \text{ find } x \in L \cap \Omega, \] where $L$ and $\Omega$ are respectively a linear subspace and the interior of a symmetric cone in a finite-dimensional vector space $V$. This projection and rescaling algorithm is inspired by previous work … Read more
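
A heavily simplified sketch of the pattern for the special case $\Omega = \operatorname{int}(\mathbb{R}^n_+)$ and $L = \operatorname{null}(A)$, with a von Neumann-style basic phase plus a heuristic coordinate rescaling; the paper's actual basic procedure, rescaling rule, and guarantees are more refined than this:

import numpy as np

def basic_phase(P, inner):
    # von Neumann-style scheme: seek z in the simplex with P z > 0
    n = P.shape[0]
    z = np.ones(n) / n
    for _ in range(inner):
        u = P @ z
        if np.all(u > 0):
            return u, None
        i = int(np.argmin(u))
        e = np.zeros(n); e[i] = 1.0
        diff = P @ e - u                         # move toward the violated vertex
        denom = diff @ diff
        lam = 0.0 if denom == 0.0 else float(np.clip(-(u @ diff) / denom, 0.0, 1.0))
        z = (1.0 - lam) * z + lam * e
    return None, int(np.argmax(z))               # coordinate blocking progress

def projection_rescaling(A, outer=20, inner=500):
    # seek x with A x = 0 and x > 0, rescaling a coordinate when stalled
    n = A.shape[1]
    A = A.astype(float).copy()
    D = np.ones(n)                               # accumulated rescaling
    for _ in range(outer):
        P = np.eye(n) - np.linalg.pinv(A) @ A    # projector onto null(A)
        x, i = basic_phase(P, inner)
        if x is not None:
            return D * x                         # undo the rescaling
        A[:, i] /= 2.0; D[i] /= 2.0              # rescale the stalled coordinate
    return None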

New Douglas-Rachford algorithmic structures and their convergence analyses

In this paper we study new algorithmic structures with Douglas-Rachford (DR) operators to solve convex feasibility problems. We propose to embed the basic two-set-DR algorithmic operator into the String-Averaging Projections (SAP) and into the Block-Iterative Projection (BIP) algorithmic structures, thereby creating new DR algorithmic schemes that include the recently proposed cyclic Douglas-Rachford … Read more
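
A minimal sketch of the two-set DR operator and one simple averaged arrangement over pairs of sets (an illustrative stand-in for the SAP/BIP structures, not the paper's exact schemes; recall that for plain two-set DR it is the projection of the limit onto the first set that solves the feasibility problem):

import numpy as np

def reflect(P, x):
    return 2.0 * P(x) - x                        # reflection through a convex set

def dr_step(PA, PB, x):
    # two-set Douglas-Rachford operator: x -> (x + R_B R_A x) / 2
    return 0.5 * (x + reflect(PB, reflect(PA, x)))

def averaged_dr(projections, x, iters=1000):
    # average the two-set DR operators over consecutive pairs of sets
    pairs = list(zip(projections, projections[1:]))
    for _ in range(iters):
        x = np.mean([dr_step(PA, PB, x) for PA, PB in pairs], axis=0)
    return x

def ball_projector(center, radius):
    def P(x):
        d = np.linalg.norm(x - center)
        return x if d <= radius else center + radius * (x - center) / d
    return P

# example: a point in the intersection of three discs
projs = [ball_projector(np.array(c), 1.5)
         for c in ([0.0, 0.0], [1.0, 0.0], [0.0, 1.0])]
x = averaged_dr(projs, np.array([5.0, 5.0]))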

Generalized Conjugate Gradient Methods for $\ell_1$ Regularized Convex Quadratic Programming with Finite Convergence

The conjugate gradient (CG) method is an efficient iterative method for large-scale strongly convex quadratic programming (QP). In this paper we propose some generalized CG (GCG) methods for solving the $\ell_1$-regularized (possibly not strongly) convex QP that terminate at an optimal solution in a finite number of iterations. At each iteration, our methods first … Read more
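
For reference, the classical building block (standard CG for the strongly convex case, which itself terminates finitely; the teaser does not specify how the generalized methods extend it):

import numpy as np

def conjugate_gradient(Q, c, tol=1e-10):
    # standard CG for min 0.5*x^T Q x + c^T x with Q symmetric positive
    # definite, i.e. for solving Q x = -c; terminates in at most n steps
    n = c.size
    x = np.zeros(n)
    r = Q @ x + c                                # gradient / residual
    p = -r
    rr = r @ r
    for _ in range(n):
        Qp = Q @ p
        alpha = rr / (p @ Qp)
        x += alpha * p
        r += alpha * Qp
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = -r + (rr_new / rr) * p               # new conjugate direction
        rr = rr_new
    return x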

Acceleration of the PDHGM on strongly convex subspaces

We propose several variants of the primal-dual method due to Chambolle and Pock. Without requiring full strong convexity of the objective functions, our methods are accelerated on subspaces with strong convexity. This yields mixed rates, $O(1/N^2)$ with respect to initialisation and $O(1/N)$ with respect to the dual sequence, and the residual part of the primal … Read more
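
A minimal sketch of the unaccelerated baseline (the PDHGM/Chambolle-Pock iteration for an assumed model $\min_x \frac{1}{2}\|x-b\|^2 + \lambda\|Kx\|_1$; the paper's variants adapt the step sizes on the strongly convex subspaces):

import numpy as np

def chambolle_pock(K, b, lam, iters=500):
    # baseline PDHGM for min_x 0.5*||x - b||^2 + lam*||K x||_1
    m, n = K.shape
    L = np.linalg.norm(K, 2)
    tau = sigma = 0.99 / L                       # ensures tau*sigma*||K||^2 < 1
    theta = 1.0
    x = np.zeros(n); xbar = x.copy(); y = np.zeros(m)
    for _ in range(iters):
        y = np.clip(y + sigma * (K @ xbar), -lam, lam)         # dual prox: project onto [-lam, lam]^m
        x_new = (x - tau * (K.T @ y) + tau * b) / (1.0 + tau)  # primal prox of 0.5*||. - b||^2
        xbar = x_new + theta * (x_new - x)       # extrapolation step
        x = x_new
    return x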