An Algorithmic Framework of Generalized Primal-Dual Hybrid Gradient Methods for Saddle Point Problems

The primal-dual hybrid gradient method (PDHG) originates from the Arrow-Hurwicz method and has been widely used to solve saddle point problems, particularly in image processing. With the introduction of a combination parameter, Chambolle and Pock proposed a generalized PDHG scheme with both theoretical and numerical advantages. It has been shown that, except for …
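For orientation, here is a minimal sketch of the generalized scheme of Chambolle and Pock for the saddle point problem $\min_x \max_y \langle Kx, y\rangle + G(x) - F^*(y)$; the step sizes $\tau, \sigma$ and the combination parameter $\theta$ follow their paper, while the notational choices here are for illustration only.

```latex
% Generalized PDHG (Chambolle-Pock) with combination parameter \theta \in [0,1];
% step sizes \tau, \sigma > 0 are assumed to satisfy \tau\sigma\|K\|^2 < 1.
\begin{align*}
  y^{n+1}       &= \mathrm{prox}_{\sigma F^*}\big(y^n + \sigma K \bar{x}^n\big), \\
  x^{n+1}       &= \mathrm{prox}_{\tau G}\big(x^n - \tau K^* y^{n+1}\big), \\
  \bar{x}^{n+1} &= x^{n+1} + \theta\,(x^{n+1} - x^n).
\end{align*}
```

Setting $\theta = 0$ recovers the original Arrow-Hurwicz-type iteration, while $\theta = 1$ gives the standard convergent PDHG scheme.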

A semi-proximal-based strictly contractive Peaceman-Rachford splitting method

The Peaceman-Rachford splitting method is very efficient for minimizing the sum of two functions, each depending on its own variable, subject to a linear equality constraint. However, its convergence is not guaranteed without extra requirements. Very recently, He et al. (SIAM J. Optim. 24: 1011–1040, 2014) proved the convergence of a strictly contractive Peaceman-Rachford …
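As a reference point, a sketch of the strictly contractive variant for $\min_{x,y} f(x) + g(y)$ subject to $Ax + By = b$, where $\mathcal{L}_\beta$ denotes the augmented Lagrangian; the underrelaxation factor $\alpha \in (0,1)$ applied to both dual updates is what makes the scheme strictly contractive.

```latex
% Strictly contractive Peaceman-Rachford splitting; \alpha \in (0,1), \beta > 0,
% with \mathcal{L}_\beta(x,y,\lambda) = f(x) + g(y) - \lambda^\top(Ax + By - b)
%                                       + (\beta/2)\|Ax + By - b\|^2.
\begin{align*}
  x^{k+1}         &= \arg\min_x \ \mathcal{L}_\beta(x, y^k, \lambda^k), \\
  \lambda^{k+1/2} &= \lambda^k - \alpha\beta\,(A x^{k+1} + B y^k - b), \\
  y^{k+1}         &= \arg\min_y \ \mathcal{L}_\beta(x^{k+1}, y, \lambda^{k+1/2}), \\
  \lambda^{k+1}   &= \lambda^{k+1/2} - \alpha\beta\,(A x^{k+1} + B y^{k+1} - b).
\end{align*}
```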

On the convergence rate of an inexact proximal point algorithm for quasiconvex minimization on Hadamard manifolds

In this paper we present a convergence-rate analysis of an inexact proximal point algorithm for minimizing quasiconvex objective functions on Hadamard manifolds. We prove that, under natural assumptions, the sequence generated by the algorithm converges linearly or superlinearly to a critical point of the problem.
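As a reminder of the underlying template (the paper's specific inexactness criterion is not reproduced here), the proximal point iteration on a Hadamard manifold $M$ with Riemannian distance $d$ reads:

```latex
% Proximal point step on a Hadamard manifold (M, d), with parameter \lambda_k > 0;
% the inexact version solves this subproblem only approximately.
x^{k+1} \approx \operatorname*{arg\,min}_{x \in M} \left\{ f(x) + \frac{\lambda_k}{2}\, d^2(x, x^k) \right\}
```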

Distributed Gradient Methods with Variable Number of Working Nodes

We consider distributed optimization where $N$ nodes in a connected network minimize the sum of their local costs subject to a common constraint set. We propose a distributed projected gradient method where each node, at each iteration $k$, performs an update (is active) with probability $p_k$, and stays idle (is inactive) with probability $1-p_k$. Whenever …
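A minimal NumPy sketch of the activation mechanism described above; the mixing matrix `W`, the step-size schedule, the function names, and the choice to mix with all neighbors' current iterates (active or not) are illustrative assumptions, not the paper's exact update.

```python
import numpy as np

def distributed_projected_gradient(grads, project, W, x0, p, alpha, iters, seed=0):
    """Sketch of a distributed projected gradient method with randomly idle nodes.

    grads   : list of per-node gradient oracles grad_i(x)
    project : projection onto the common constraint set
    W       : doubly stochastic mixing matrix of the network (N x N)
    p       : schedule p(k), the probability a node is active at iteration k
    alpha   : step-size schedule alpha(k)
    """
    rng = np.random.default_rng(seed)
    N = len(grads)
    x = [np.array(x0, dtype=float) for _ in range(N)]
    for k in range(iters):
        active = rng.random(N) < p(k)       # each node is active w.p. p_k
        x_new = [xi.copy() for xi in x]
        for i in range(N):
            if not active[i]:
                continue                    # inactive node keeps its iterate
            # consensus with neighbors, then a projected gradient step
            avg = sum(W[i, j] * x[j] for j in range(N))
            x_new[i] = project(avg - alpha(k) * grads[i](x[i]))
        x = x_new
    return x
```

The appeal of a variable schedule $p_k$ is that early iterations can run with few active nodes (saving communication and computation) while $p_k \to 1$ preserves asymptotic accuracy.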

Iteration Complexity Analysis of Multi-Block ADMM for a Family of Convex Minimization without Strong Convexity

The alternating direction method of multipliers (ADMM) is widely used in solving structured convex optimization problems due to its superior practical performance. On the theoretical side, however, a counterexample was given in [7], indicating that the multi-block ADMM for minimizing the sum of $N$ $(N\geq 3)$ convex functions with $N$ block variables linked by linear …
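For reference, the multi-block ADMM in question applies a Gauss-Seidel sweep to the augmented Lagrangian $\mathcal{L}_\beta$ of $\min \sum_{i=1}^N \theta_i(x_i)$ subject to $\sum_{i=1}^N A_i x_i = b$:

```latex
% One cycle of the directly extended N-block ADMM (\beta > 0):
\begin{align*}
  x_i^{k+1}     &= \arg\min_{x_i}\ \mathcal{L}_\beta\big(x_1^{k+1},\dots,x_{i-1}^{k+1},\, x_i,\, x_{i+1}^{k},\dots,x_N^{k},\, \lambda^k\big), \quad i = 1,\dots,N, \\
  \lambda^{k+1} &= \lambda^k - \beta\Big(\textstyle\sum_{i=1}^N A_i x_i^{k+1} - b\Big).
\end{align*}
```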

Convergence rates for forward-backward dynamical systems associated with strongly monotone inclusions

We investigate the convergence rates of the trajectories generated by implicit first- and second-order dynamical systems associated with finding the zeros of the sum of a maximally monotone operator and a monotone, Lipschitz continuous operator in a real Hilbert space. We show that these trajectories converge strongly, with exponential rate, to …
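As one representative instance (the first-order case; the abstract indicates a second-order system is treated as well), the forward-backward dynamical system attached to the inclusion $0 \in Ax + Bx$ can be written as:

```latex
% Forward-backward dynamical system for finding a zero of A + B, where
% J_{\gamma A} = (I + \gamma A)^{-1} is the resolvent of the maximally
% monotone operator A, and B is monotone and Lipschitz continuous.
\dot{x}(t) + x(t) = J_{\gamma A}\big(x(t) - \gamma B\,x(t)\big), \qquad x(0) = x_0
```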

On the Iteration Complexity of Some Projection Methods for Monotone Linear Variational Inequalities

Projection-type methods are among the most important methods for solving monotone linear variational inequalities. In this note, we analyze the iteration complexity of two projection methods and accordingly establish their worst-case $O(1/t)$ convergence rates, measured by the iteration complexity in both the ergodic and nonergodic senses, where $t$ is the iteration counter. Our analysis …
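The note analyzes two specific projection methods; as a generic illustration of a projection step for the linear variational inequality $\mathrm{VI}(\Omega, M, q)$, here is an extragradient-style sketch, where the `project` oracle and the step-size rule are assumptions for the example and not necessarily the methods studied in the note.

```python
import numpy as np

def extragradient_lvi(M, q, project, u0, beta, iters):
    """Extragradient-style projection sketch for LVI(Omega, M, q):
    find u* in Omega with (u - u*)^T (M u* + q) >= 0 for all u in Omega.
    For convergence, beta is typically required to satisfy beta < 1/||M||."""
    u = np.array(u0, dtype=float)
    for _ in range(iters):
        u_pred = project(u - beta * (M @ u + q))    # predictor: projected step at u
        u = project(u - beta * (M @ u_pred + q))    # corrector: re-evaluate at predictor
    return u
```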

Convergence Analysis of Primal-Dual Based Methods for Total Variation Minimization with Finite Element Approximation

We consider the total variation minimization model with consistent finite element discretization. It has been shown in the literature that this model can be reformulated as a saddle-point problem and be efficiently solved by the primal-dual method. The convergence for this application of the primal-dual method has also been analyzed. In this paper, we focus …
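For concreteness, in the classical ROF instance the saddle-point reformulation referred to above takes the following form (the fidelity weight $\alpha > 0$ is a generic choice here; the discretization details belong to the finite element setting of the paper):

```latex
% Total variation minimization and its saddle-point (primal-dual) form,
% using the duality |\nabla u| = \max_{|p| \le 1} \langle \nabla u, p \rangle:
\min_u \ \int_\Omega |\nabla u|\,dx + \frac{\alpha}{2}\|u - f\|^2
\;=\;
\min_u \max_{\|p\|_\infty \le 1} \ \langle \nabla u, p \rangle + \frac{\alpha}{2}\|u - f\|^2
```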

Block-wise Alternating Direction Method of Multipliers with Gaussian Back Substitution for Multiple-block Convex Programming

We consider the linearly constrained convex minimization model with a separable objective function which is the sum of $m$ functions without coupled variables, and discuss how to design an efficient algorithm based on the fundamental technique of splitting the augmented Lagrangian method (ALM). Our focus is the specific big-data scenario where $m$ is huge. A …
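The block-wise idea can be summarized as regrouping the $m$ blocks into two groups and applying a two-block splitting to the grouped model, with a Gaussian back substitution step serving as a correction; a sketch of the grouped model follows, where the split index $p$ is illustrative.

```latex
% Separable model and its two-group (block-wise) regrouping:
\min \ \sum_{i=1}^{m} \theta_i(x_i) \quad \text{s.t.} \quad \sum_{i=1}^{m} A_i x_i = b,
\qquad
\underbrace{(x_1,\dots,x_p)}_{\text{group } \mathbf{x}} \quad
\underbrace{(x_{p+1},\dots,x_m)}_{\text{group } \mathbf{y}}
```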

On the ergodic convergence rates of a first-order primal-dual algorithm

We revisit the proofs of convergence for a first-order primal-dual algorithm for convex optimization which we studied a few years ago. In particular, we prove rates of convergence for a more general version, with simpler proofs and more complete results.
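"Ergodic" here refers to rates measured at the averaged iterates: for iterates $(x^k, y^k)$ of the primal-dual algorithm, one forms

```latex
% Ergodic (averaged) iterates after T steps; the partial primal-dual gap
% evaluated at (X^T, Y^T) typically decays at the rate O(1/T).
X^T = \frac{1}{T}\sum_{k=1}^{T} x^k, \qquad Y^T = \frac{1}{T}\sum_{k=1}^{T} y^k
```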