Inertial Proximal ADMM for Linearly Constrained Separable Convex Optimization

The \emph{alternating direction method of multipliers} (ADMM) is a popular and efficient first-order method that has recently found numerous applications, and the proximal ADMM is an important variant of it. The main contributions of this paper are the proposal and analysis of a class of inertial proximal ADMMs, which unify the basic ideas of …
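
As a rough sketch of the template (notation assumed here, not taken from the truncated abstract), an inertial proximal ADMM for $\min_{x,y}\{f(x)+g(y) : Ax+By=b\}$ prepends an extrapolation step to proximal ADMM updates with penalty $\beta>0$:
$$\begin{aligned}
(\bar x^k,\bar y^k,\bar\lambda^k) &= (x^k,y^k,\lambda^k)+\alpha_k\big[(x^k,y^k,\lambda^k)-(x^{k-1},y^{k-1},\lambda^{k-1})\big],\\
x^{k+1} &= \operatorname*{arg\,min}_x\ f(x)+\tfrac{\beta}{2}\big\|Ax+B\bar y^k-b-\bar\lambda^k/\beta\big\|^2+\tfrac12\|x-\bar x^k\|_P^2,\\
y^{k+1} &= \operatorname*{arg\,min}_y\ g(y)+\tfrac{\beta}{2}\big\|Ax^{k+1}+By-b-\bar\lambda^k/\beta\big\|^2+\tfrac12\|y-\bar y^k\|_Q^2,\\
\lambda^{k+1} &= \bar\lambda^k-\beta\,(Ax^{k+1}+By^{k+1}-b),
\end{aligned}$$
with inertial parameters $\alpha_k\ge 0$ and positive semidefinite proximal matrices $P,Q$; the paper's particular scheme may differ in its details.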

A general inertial proximal point algorithm for mixed variational inequality problem

In this paper, we first propose a general inertial \emph{proximal point algorithm} (PPA) for the mixed \emph{variational inequality} (VI) problem. To the best of our knowledge, no convergence rate result is known in the literature for inertial-type PPAs without stronger assumptions. Under certain conditions, we are able to establish the global convergence and nonasymptotic $O(1/k)$ convergence …
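
In standard notation (assumed, since the abstract is truncated), the mixed VI problem is to find $w^*\in\Omega$ with $\theta(w)-\theta(w^*)+(w-w^*)^\top F(w^*)\ge 0$ for all $w\in\Omega$, and a general inertial PPA iterates
$$\bar w^k = w^k+\alpha_k\,(w^k-w^{k-1}),\qquad \theta(w)-\theta(w^{k+1})+(w-w^{k+1})^\top\big(F(w^{k+1})+M(w^{k+1}-\bar w^k)\big)\ \ge\ 0\quad\forall\,w\in\Omega,$$
where $M\succ 0$ is a proximal metric and $\alpha_k\in[0,1)$ is the inertial parameter.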

Low-rank spectral optimization

Various applications in signal processing and machine learning give rise to highly structured spectral optimization problems characterized by low-rank solutions. Two important examples that motivate this work are optimization problems from phase retrieval and from blind deconvolution, which are designed to yield rank-1 solutions. An algorithm is described based on solving a certain constrained eigenvalue optimization …
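
For context (a standard lifting, not quoted from the abstract), phase retrieval recovers $x$ from magnitude measurements $b_i=|\langle a_i,x\rangle|^2$; lifting $X=xx^*$ gives the spectral problem
$$\min_{X}\ \operatorname{trace}(X)\quad\text{s.t.}\quad \langle a_i a_i^*,\,X\rangle=b_i,\ \ i=1,\dots,m,\qquad X\succeq 0,$$
whose solutions are low-rank by design (ideally rank-1, so that $x$ is recovered from the leading eigenvector).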

On the Convergence of Multi-Block Alternating Direction Method of Multipliers and Block Coordinate Descent Method

The paper answers several open questions about the alternating direction method of multipliers (ADMM) and the block coordinate descent (BCD) method, which are now widely used to solve large-scale convex optimization problems in many fields. For ADMM, a theoretical understanding of the algorithm is still lacking when the objective function is not separable …
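
For reference (standard notation, assumed), the multi-block model is
$$\min_{x_1,\dots,x_n}\ \sum_{i=1}^{n} f_i(x_i)\quad\text{s.t.}\quad \sum_{i=1}^{n} A_i x_i=b,$$
where ADMM sweeps through the blocks, minimizing the augmented Lagrangian over $x_i$ with the other blocks fixed at their latest values, and then updates the multiplier via $\lambda^{k+1}=\lambda^k-\beta\big(\sum_i A_i x_i^{k+1}-b\big)$; BCD performs the analogous sweep on an unconstrained (possibly nonseparable) objective.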

A survey on operator splitting and decomposition of convex programs

Many structured convex minimization problems can be modeled by the search for a zero of the sum of two monotone operators. Operator splitting methods have been designed to simultaneously decompose and regularize these kinds of models. Here we review these models and the classical splitting methods. We focus on the numerical sensitivity …
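
Schematically (standard notation, assumed), the model is to find $x$ with
$$0\in A(x)+B(x),$$
for maximally monotone $A$ and $B$; splitting methods such as forward-backward, $x^{k+1}=J_{\gamma A}\big(x^k-\gamma B(x^k)\big)$ with resolvent $J_{\gamma A}=(I+\gamma A)^{-1}$, handle each operator separately, which is what makes decomposition possible.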

Bounded perturbation resilience of projected scaled gradient methods

We investigate projected scaled gradient (PSG) methods for convex minimization problems. These methods perform a descent step along a diagonally scaled gradient direction followed by a feasibility regaining step via orthogonal projection onto the constraint set. This constitutes a generalized algorithmic structure that encompasses as special cases the gradient projection method, the projected Newton method, …
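
In symbols (a generic form, assumed), a PSG step for $\min_{x\in C} f(x)$ reads
$$x^{k+1}=P_C\big(x^k-\gamma_k D_k\,\nabla f(x^k)\big),$$
with diagonal positive definite scaling $D_k$, step size $\gamma_k$, and orthogonal projection $P_C$ onto $C$; bounded perturbation resilience asks that convergence be preserved when a summable sequence of perturbations $e^k$ is added to the step.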

Vanishing Price of Anarchy in Large Coordinative Nonconvex Optimization

We focus on a class of nonconvex cooperative optimization problems that involve multiple participants. We study the duality framework and provide geometric and analytic characterizations of the duality gap. The dual problem is related to a market setting in which each participant pursues its own interests at a given price of common goods. The duality …
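
In the usual duality notation (assumed, not quoted from the abstract), the gap in question is
$$p^\star-d^\star\ \ge\ 0,\qquad p^\star=\min_x\{f(x): g(x)\le 0\},\qquad d^\star=\max_{\lambda\ge 0}\ \min_x\ L(x,\lambda),$$
which is generally positive for nonconvex problems; a vanishing price of anarchy corresponds to this gap becoming negligible as the number of participants grows.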

On the von Neumann and Frank-Wolfe Algorithms with Away Steps

The von Neumann algorithm is a simple coordinate-descent algorithm to determine whether the origin belongs to a polytope generated by a finite set of points. When the origin is in the interior of the polytope, the algorithm generates a sequence of points in the polytope that converges linearly to zero. The algorithm’s rate of convergence …
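
In its classical form (a standard description), given points $a_1,\dots,a_n$ and an iterate $x_k$ in their convex hull, the step greedily selects a vertex and line-searches toward it:
$$j\in\operatorname*{arg\,min}_{i}\ \langle x_k,\,a_i\rangle,\qquad x_{k+1}=(1-\lambda_k)\,x_k+\lambda_k a_j,\quad \lambda_k=\operatorname*{arg\,min}_{\lambda\in[0,1]}\ \big\|(1-\lambda)x_k+\lambda a_j\big\|;$$
an away step additionally allows weight to be shifted off a poorly aligned vertex already in the current convex combination, the mechanism behind improved linear rates in away-step variants.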

Asynchronous Block-Iterative Primal-Dual Decomposition Methods for Monotone Inclusions

We propose new primal-dual decomposition algorithms for solving systems of inclusions involving sums of linearly composed maximally monotone operators. The principal innovation in these algorithms is that they are block-iterative in the sense that, at each iteration, only a subset of the monotone operators needs to be processed, as opposed to all operators as in …
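
A representative template (assumed notation) is the primal inclusion
$$0\in Ax+\sum_{i=1}^{m} L_i^{*}\, B_i(L_i x),$$
with maximally monotone $A$ and $B_i$ and bounded linear operators $L_i$, solved jointly with its dual; block-iterative then means that each iteration activates the resolvents of only some of the $B_i$, possibly using outdated (asynchronous) information for the others.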

An optimal randomized incremental gradient method

In this paper, we consider a class of finite-sum convex optimization problems whose objective function is given by the summation of $m$ ($\ge 1$) smooth components together with some other relatively simple terms. We first introduce a deterministic primal-dual gradient (PDG) method that can achieve the optimal black-box iteration complexity for solving these composite optimization …
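
The problem class in question has the standard finite-sum form (notation assumed)
$$\min_{x\in X}\ \ \psi(x):=\sum_{i=1}^{m} f_i(x)+h(x),$$
with each $f_i$ smooth and $h$ relatively simple (e.g., prox-friendly); a randomized incremental gradient method samples a single component index per iteration rather than evaluating all $m$ gradients.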