An O(1/k) Convergence Rate for the Variable Stepsize Bregman Operator Splitting Algorithm

An earlier paper proved the convergence of a variable stepsize Bregman operator splitting algorithm (BOSVS) for minimizing $\phi(Bu)+H(u)$ where $H$ and $\phi$ are convex functions, and $\phi$ is possibly nonsmooth. The algorithm was shown to be relatively efficient when applied to partially parallel magnetic resonance image reconstruction problems. In this paper, the convergence rate of …
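The problem structure $\min_u \phi(Bu)+H(u)$ lends itself to splitting methods that decouple $\phi$ from $H$. The following is a minimal, simplified sketch of such a splitting step, not the authors' BOSVS implementation: it assumes a quadratic $H(u)=\tfrac12\|Au-f\|^2$ and $\phi=\|\cdot\|_1$, linearizes both $H$ and the quadratic coupling in the $u$-update, and plays the role of the variable stepsize with a simple backtracking rule on the proximal parameter.

```python
# A minimal, simplified sketch of a Bregman/ADMM-style splitting step for
#   min_u  phi(B u) + H(u),
# with H(u) = 0.5*||A u - f||^2 and phi = ||.||_1 assumed for illustration.
# This is NOT the paper's BOSVS algorithm; both H and the quadratic coupling
# are linearized, and the "variable stepsize" is a backtracking rule on delta.
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def split_bregman_like(A, B, f, rho=1.0, delta=1.0, eta=2.0, iters=200):
    u = np.zeros(A.shape[1])
    w = np.zeros(B.shape[0])          # splitting variable, w ~ B u
    b = np.zeros(B.shape[0])          # scaled multiplier
    for _ in range(iters):
        # u-update: gradient of the smooth part plus the quadratic coupling,
        # followed by a backtracked proximal step of size 1/delta.
        grad_H = A.T @ (A @ u - f)
        grad_c = rho * B.T @ (B @ u - w + b)
        while True:
            u_new = u - (grad_H + grad_c) / delta
            d = u_new - u
            # accept once the quadratic model majorizes H along the step
            if (0.5 * np.linalg.norm(A @ u_new - f) ** 2
                    <= 0.5 * np.linalg.norm(A @ u - f) ** 2
                    + grad_H @ d + 0.5 * delta * (d @ d) + 1e-12):
                break
            delta *= eta
        u = u_new
        # w-update: proximal step on phi (soft thresholding for the l1 norm)
        w = soft_threshold(B @ u + b, 1.0 / rho)
        # multiplier update
        b = b + B @ u - w
    return u
```

The backtracking loop only enlarges $\delta$ when the local quadratic model fails to majorize $H$ along the step, so the stepsize adapts to the local curvature rather than being fixed in advance.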

Randomized First-order Methods for Saddle Point Optimization

In this paper, we present novel randomized algorithms for solving saddle point problems whose dual feasible region is a direct product of many convex sets. Our algorithms can achieve an ${\cal O}(1/N)$ rate of convergence by solving only one dual subproblem at each iteration. They can also achieve an ${\cal O}(1/N^2)$ rate of convergence if a …
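As a point of reference (the precise formulation in the paper may differ), saddle-point problems with a product-structured dual feasible region typically take the form

$$\min_{x\in X}\ \max_{\,y=(y_1,\dots,y_m)\in Y_1\times\cdots\times Y_m}\ f(x)+\sum_{i=1}^{m}\bigl(\langle A_i x,\,y_i\rangle-g_i(y_i)\bigr),$$

so each dual block $y_i$ has its own subproblem; a randomized method updates a single block, chosen at random, per iteration instead of all $m$ of them.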

Convergence Analysis of Primal-Dual Based Methods for Total Variation Minimization with Finite Element Approximation

We consider the total variation minimization model with a consistent finite element discretization. It has been shown in the literature that this model can be reformulated as a saddle-point problem and solved efficiently by the primal-dual method. The convergence of the primal-dual method for this application has also been analyzed. In this paper, we focus …
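For orientation, the sketch below applies a generic primal-dual (Chambolle-Pock-type) iteration to the discrete TV-denoising saddle point $\min_u\max_{|p|\le 1}\langle Du,p\rangle+\tfrac{\lambda}{2}\|u-f\|^2$, using simple finite differences rather than a finite element discretization; it illustrates the method being analyzed, not the paper's setting or code.

```python
# A minimal sketch (finite differences, not a finite element discretization)
# of the primal-dual method for the TV-denoising saddle point
#   min_u max_{|p|<=1}  <D u, p> + (lam/2)*||u - f||^2.
import numpy as np

def grad(u):                       # forward differences, Neumann boundary
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):                   # negative adjoint of grad
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise_pd(f, lam=10.0, iters=300):
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    L2 = 8.0                       # ||D||^2 for 2-D forward differences
    tau = sigma = 1.0 / np.sqrt(L2)
    for _ in range(iters):
        # dual ascent step + pointwise projection onto {|p| <= 1}
        gx, gy = grad(u_bar)
        px += sigma * gx; py += sigma * gy
        norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))
        px /= norm; py /= norm
        # primal descent step (prox of the quadratic data term)
        u_old = u
        u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        # extrapolation
        u_bar = 2 * u - u_old
    return u
```

With $\tau\sigma\|D\|^2<1$ the iteration converges for this convex problem; the paper's analysis concerns the analogous scheme in the finite element setting.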

On the ergodic convergence rates of a first-order primal-dual algorithm

We revisit the proofs of convergence for a first-order primal-dual algorithm for convex optimization which we studied a few years ago. In particular, we prove rates of convergence for a more general version, with simpler proofs and more complete results.
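For reference, the $O(1/N)$-type rates in this line of work are usually stated for the ergodic (averaged) iterates, e.g.

$$\bar x_N=\frac{1}{N}\sum_{k=1}^{N}x^k,\qquad \bar y_N=\frac{1}{N}\sum_{k=1}^{N}y^k,$$

for which a suitable primal-dual gap evaluated at $(\bar x_N,\bar y_N)$ decays like $O(1/N)$; the exact quantities bounded in the article may differ.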

Preconditioning of Active-Set Newton Methods for PDE-constrained Optimal Control Problems

We address the problem of preconditioning a sequence of saddle point linear systems arising in the solution of PDE-constrained optimal control problems via active-set Newton methods, with control and (regularized) state constraints. We present two new preconditioners based on a full block matrix factorization of the Schur complement of the Jacobian matrices, where the active-set …
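Schematically (the Jacobians in the paper additionally carry active-set and regularization blocks), such saddle-point systems and Schur-complement-based preconditioners have the block form

$$\mathcal K=\begin{bmatrix}A & B^{\top}\\ B & 0\end{bmatrix},\qquad \mathcal P=\begin{bmatrix}A & 0\\ B & -\widehat S\end{bmatrix},\qquad S=BA^{-1}B^{\top},$$

where $\widehat S\approx S$ is an approximation of the Schur complement that is cheap to apply within a Krylov solver.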

An accelerated HPE-type algorithm for a class of composite convex-concave saddle-point problems

This article proposes a new algorithm for solving a class of composite convex-concave saddle-point problems. The new algorithm is a special instance of the hybrid proximal extragradient framework in which a variant of Nesterov's accelerated method is used to approximately solve the prox subproblems. One of the advantages of the new method is that it works for …
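For context (notation not taken from the article), a relative-error HPE step for a maximal monotone operator $T$ whose zeros are the saddle points requires a triple $(\tilde x_k, v_k,\varepsilon_k)$ and a stepsize $\lambda_k>0$ with

$$v_k\in T^{\varepsilon_k}(\tilde x_k),\qquad \|\lambda_k v_k+\tilde x_k-x_{k-1}\|^2+2\lambda_k\varepsilon_k\le\sigma^2\,\|\tilde x_k-x_{k-1}\|^2,\qquad x_k=x_{k-1}-\lambda_k v_k,$$

with $\sigma\in[0,1)$; the accelerated variant mentioned above is what computes an approximate prox point $\tilde x_k$ satisfying this inexactness condition.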

A note on Fejér-monotone sequences in product spaces and its applications to the dual convergence of augmented Lagrangian methods

In a recent Math. Program. paper, Eckstein and Silva proposed a new error criterion for the approximate solutions of augmented Lagrangian subproblems. Based on a saddle-point formulation of the primal and dual problems, they proved that dual sequences generated by augmented Lagrangians under this error criterion are bounded and that their limit points are dual …
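For reference, a sequence $\{x^k\}$ in a Hilbert space is Fejér monotone with respect to a nonempty set $S$ if

$$\|x^{k+1}-s\|\le\|x^k-s\|\qquad\text{for all } s\in S \text{ and all } k\ge 0;$$

such a sequence is automatically bounded, and it converges (weakly) to a point of $S$ precisely when all of its weak cluster points lie in $S$, which is the mechanism behind boundedness-plus-cluster-point arguments of this kind.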

Accelerating block-decomposition first-order methods for solving composite saddle-point and two-player Nash equilibrium problems

This article considers the two-player composite Nash equilibrium (CNE) problem with a separable non-smooth part, which is known to include the composite saddle-point (CSP) problem as a special case. Due to its two-block structure, this problem can be solved by any algorithm belonging to the block-decomposition hybrid proximal-extragradient (BD-HPE) framework. The framework consists of a …
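Concretely (in generic notation, not necessarily that of the article), the composite saddle-point problem

$$\min_{x}\ \max_{y}\ \Phi(x,y)+g_1(x)-g_2(y)$$

is the two-player CNE problem in which one player minimizes $\Phi(x,y)+g_1(x)$ over $x$ while the other minimizes $-\Phi(x,y)+g_2(y)$ over $y$, which is why CSP is a special case of CNE.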

A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms

We propose a new first-order splitting algorithm for jointly solving the primal and dual formulations of large-scale convex minimization problems involving the sum of a smooth function with Lipschitzian gradient, a nonsmooth proximable function, and linear composite functions. This is a full splitting approach in the sense that the gradient and the linear operators involved …
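A minimal sketch of a primal-dual splitting iteration of this type (not the authors' algorithm or code) is given below for the model problem $\min_x f(x)+g(x)+h(Lx)$, with an assumed quadratic $f$, $\ell_1$ terms for $g$ and $h$, and the prox of the conjugate $h^*$ handled through the Moreau identity.

```python
# A minimal sketch (not the authors' implementation) of a primal-dual
# splitting iteration for  min_x f(x) + g(x) + h(L x),
# with smooth f, proximable g and h, and a linear operator L.
import numpy as np

def prox_l1(z, t):                          # prox of t*||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def primal_dual_splitting(b, L, lam=0.1, mu=0.5, iters=500):
    # Illustrative instance: f(x) = 0.5*||x - b||^2 (gradient x - b,
    # Lipschitz constant 1), g = lam*||.||_1, h = mu*||.||_1 composed with L.
    x = np.zeros(b.size)
    y = np.zeros(L.shape[0])
    Lnorm2 = np.linalg.norm(L, 2) ** 2      # ||L||^2
    tau = 0.9 / (0.5 + np.sqrt(Lnorm2))     # stepsizes chosen so that
    sigma = tau                             # tau*(Lf/2 + sigma*||L||^2) < 1
    for _ in range(iters):
        grad_f = x - b
        # primal step: gradient on f, prox on g
        x_new = prox_l1(x - tau * (grad_f + L.T @ y), tau * lam)
        # dual step at the extrapolated primal point 2*x_new - x;
        # prox of the conjugate h* via the Moreau identity
        z = y + sigma * (L @ (2 * x_new - x))
        y = z - sigma * prox_l1(z / sigma, mu / sigma)
        x = x_new
    return x
```

The gradient of $f$ is only evaluated, never inverted, and $L$ enters only through matrix-vector products with $L$ and $L^{\top}$, which is the sense in which such schemes are "full splitting" methods.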

Customized proximal point algorithms for linearly constrained convex minimization and saddle-point problems: a uniform approach

This paper takes a uniform look at the customized applications of the proximal point algorithm (PPA) to two classes of problems: the linearly constrained convex minimization problem with a generic or separable objective function, and a saddle-point problem. We model these two classes of problems uniformly by a mixed variational inequality, and show how PPA with …
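In the notation commonly used for such uniform models (possibly different from the paper's), the mixed variational inequality reads: find $w^*\in\Omega$ such that

$$\theta(u)-\theta(u^*)+(w-w^*)^{\top}F(w^*)\ge 0\qquad\forall\, w\in\Omega,$$

and a customized PPA generates $w^{k+1}$ by solving the same inequality with an extra proximal term $(w-w^{k+1})^{\top}G\,(w^{k+1}-w^k)$ added, where the positive (semi)definite matrix $G$ is chosen so that each subproblem has a closed-form or easily computable solution.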