A first-order primal-dual algorithm with linesearch

The paper proposes a linesearch for the primal-dual method. Each iteration of the linesearch requires updating only the dual (or primal) variable. For many problems, in particular for regularized least squares, the linesearch does not require any additional matrix-vector multiplications. We prove convergence of the proposed method under the standard assumptions. We also show … Read more
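To make the structure concrete, here is a minimal Python sketch of one primal-dual (PDHG-type) iteration in which only the dual variable is recomputed inside the linesearch loop. The callables `prox_tau_f` and `prox_sigma_gconj`, the trial step-size rule, and the acceptance test below are illustrative placeholders, not the paper's exact linesearch rule.

```python
import numpy as np

def pdhg_linesearch_step(x, y, K, prox_tau_f, prox_sigma_gconj,
                         tau, beta=1.0, delta=0.99, mu=0.7):
    """One primal-dual iteration with a backtracking linesearch (sketch).

    The primal update is done once per outer iteration; only the dual
    variable is recomputed inside the loop.  The acceptance test is a
    simplified placeholder, not the paper's exact criterion.
    """
    x_new = prox_tau_f(x - tau * (K.T @ y), tau)   # primal step
    tau_new = tau * 1.2                            # trial (possibly larger) step size
    while True:
        theta = tau_new / tau                      # extrapolation parameter
        x_bar = x_new + theta * (x_new - x)
        sigma = beta * tau_new                     # keep sigma proportional to tau
        y_new = prox_sigma_gconj(y + sigma * (K @ x_bar), sigma)
        # accept if the dual change is compatible with the trial step size
        lhs = np.sqrt(beta) * tau_new * np.linalg.norm(K.T @ (y_new - y))
        if lhs <= delta * np.linalg.norm(y_new - y) or tau_new < 1e-12:
            return x_new, y_new, tau_new
        tau_new *= mu                              # shrink the trial step and retry
```

For regularized least squares the products $Kx$ and $K^\top y$ can be cached and updated cheaply across linesearch trials, which is consistent with the abstract's remark that no additional matrix-vector multiplications are needed in that case.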

Linearized Alternating Direction Method of Multipliers via Positive-Indefinite Proximal Regularization for Convex Programming

The alternating direction method of multipliers (ADMM) is being widely used for various convex minimization models with separable structures arising in a variety of areas. In the literature, the proximal version of ADMM, which allows ADMM’s subproblems to be proximally regularized, has been well studied. In particular, the linearized version of ADMM can be obtained when the … Read more
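As a rough illustration, assuming the standard formulation $\min f(x)+g(y)$ subject to $Ax+By=b$ with penalty $\beta$, the linearized x-subproblem reduces to a single proximal step of $f$; the sketch below uses a placeholder `prox_tau_f` and is not the paper's positive-indefinite variant.

```python
import numpy as np

def linearized_admm_x_update(x, y, lam, A, B, b, prox_tau_f, beta, tau):
    """Linearized x-subproblem of ADMM for  min f(x) + g(y)  s.t. Ax + By = b.

    The quadratic coupling term of the augmented Lagrangian is linearized at
    the current x and a proximal term (1/(2*tau))||x - x^k||^2 is added, so the
    subproblem becomes one proximal step of f.  Classical analyses require
    roughly tau * beta * ||A||^2 <= 1; how far this proximal regularization can
    be relaxed is what the paper studies.
    """
    residual = A @ x + B @ y - b + lam / beta      # scaled constraint residual
    grad_quad = beta * (A.T @ residual)            # gradient of the quadratic term at x
    return prox_tau_f(x - tau * grad_quad, tau)    # proximal (shrinkage) step on f
```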

Faster Alternating Direction Method of Multipliers with a Worst-case O(1/n^2) Convergence Rate

The alternating direction method of multipliers (ADMM) is being widely used for various convex programming models with separable structures arising in many scientific computing areas. The ADMM’s worst-case O(1/n) convergence rate measured by the iteration complexity has been established in the literature when its penalty parameter is a constant, where n is the iteration … Read more
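For context, the classical ADMM iteration for $\min f(x)+g(y)$ subject to $Ax+By=b$, with penalty parameter $\beta>0$ and multiplier $\lambda$, can be written in the standard textbook form (not specific to this paper):

\[
\begin{aligned}
x^{k+1} &= \arg\min_x\; f(x) + \tfrac{\beta}{2}\,\|Ax + By^k - b + \lambda^k/\beta\|^2,\\
y^{k+1} &= \arg\min_y\; g(y) + \tfrac{\beta}{2}\,\|Ax^{k+1} + By - b + \lambda^k/\beta\|^2,\\
\lambda^{k+1} &= \lambda^k + \beta\,(Ax^{k+1} + By^{k+1} - b).
\end{aligned}
\]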

Exact Worst-case Performance of First-order Methods for Composite Convex Optimization

We provide a framework for computing the exact worst-case performance of any algorithm belonging to a broad class of oracle-based first-order methods for composite convex optimization, including those performing explicit, projected, proximal, conditional and inexact (sub)gradient steps. We simultaneously obtain tight worst-case guarantees and explicit instances of optimization problems on which the algorithm reaches this … Read more
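In generic terms, such a worst-case performance estimation problem can be stated as an optimization over problem instances; the formulation below is a common hedged sketch, and the exact formulation and its tractable reformulation are what the paper develops:

\[
\max_{f\in\mathcal{F},\; x_0}\;\; f(x_N) - f(x_\star)
\quad\text{s.t.}\quad x_{i+1} = \mathcal{M}_i\big(x_0,\dots,x_i;\, \text{oracle information}\big),\;\; i=0,\dots,N-1,\qquad \|x_0 - x_\star\| \le R,
\]

where $\mathcal{F}$ is the composite problem class, $\mathcal{M}_i$ encodes the method's update rules, and $R$ bounds the initial distance to a minimizer $x_\star$.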

Convergence Analysis of ISTA and FISTA for “Strongly + Semi” Convex Programming

The iterative shrinkage/thresholding algorithm (ISTA) and its faster version FISTA have been widely used in the literature. In this paper, we consider general versions of the ISTA and FISTA in the more general “strongly + semi” convex setting, i.e., minimizing the sum of a strongly convex function and a semiconvex function; and conduct convergence analysis … Read more
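As a reference point, a minimal Python sketch of the standard (convex-setting) ISTA and FISTA iterations is given below; the "strongly + semi" convex setting analyzed in the paper may require different step-size conditions, which this sketch does not capture.

```python
import numpy as np

def ista(grad_f, prox_g, x0, step, n_iters=100):
    """Proximal-gradient (ISTA) iteration: x+ = prox_{step*g}(x - step*grad_f(x))."""
    x = x0
    for _ in range(n_iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

def fista(grad_f, prox_g, x0, step, n_iters=100):
    """FISTA: the ISTA step is taken from an extrapolated point with Nesterov momentum."""
    x, x_prev, t = x0, x0, 1.0
    for _ in range(n_iters):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x + ((t - 1.0) / t_next) * (x - x_prev)   # extrapolation step
        x_prev = x
        x = prox_g(z - step * grad_f(z), step)        # proximal-gradient step at z
        t = t_next
    return x
```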

A unified convergence bound for conjugate gradient and accelerated gradient

Nesterov’s accelerated gradient method for minimizing a smooth strongly convex function $f$ is known to reduce $f(x_k)-f(x^*)$ by a factor of $\epsilon\in(0,1)$ after $k\ge O(\sqrt{L/\ell}\log(1/\epsilon))$ iterations, where $\ell,L$ are the two parameters of smooth strong convexity. Furthermore, it is known that this is the best possible complexity in the function-gradient oracle model of computation. The … Read more
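A minimal sketch of the constant-momentum form of the accelerated method for an $\ell$-strongly convex, $L$-smooth $f$ (one standard presentation, assuming gradient access through a callable `grad_f`):

```python
import numpy as np

def nesterov_strongly_convex(grad_f, x0, L, ell, n_iters=100):
    """Accelerated gradient with constant momentum for ell-strongly convex,
    L-smooth f.  With kappa = L/ell, the momentum (sqrt(kappa)-1)/(sqrt(kappa)+1)
    yields the O(sqrt(L/ell) log(1/eps)) iteration bound quoted above."""
    kappa = L / ell
    momentum = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)
    x, y = x0, x0
    for _ in range(n_iters):
        x_next = y - grad_f(y) / L              # gradient step from the extrapolated point
        y = x_next + momentum * (x_next - x)    # extrapolation with constant momentum
        x = x_next
    return x
```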

An Algorithmic Framework of Generalized Primal-Dual Hybrid Gradient Methods for Saddle Point Problems

The primal-dual hybrid gradient method (PDHG) originates from the Arrow-Hurwicz method, and it has been widely used to solve saddle point problems, particularly in image processing areas. With the introduction of a combination parameter, Chambolle and Pock proposed a generalized PDHG scheme with both theoretical and numerical advantages. It has been analyzed that except for … Read more
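In one standard notation, for the saddle point problem $\min_x\max_y\, \langle Kx,y\rangle + f(x) - g^*(y)$ with step sizes $\tau,\sigma>0$, the PDHG scheme with combination (extrapolation) parameter $\theta$ reads:

\[
\begin{aligned}
y^{k+1} &= \operatorname{prox}_{\sigma g^*}\!\big(y^k + \sigma K \bar{x}^k\big),\\
x^{k+1} &= \operatorname{prox}_{\tau f}\!\big(x^k - \tau K^\top y^{k+1}\big),\\
\bar{x}^{k+1} &= x^{k+1} + \theta\,(x^{k+1} - x^k),
\end{aligned}
\]

with $\theta=0$ recovering the Arrow-Hurwicz method and $\theta=1$ the most commonly analyzed variant.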

A semi-proximal-based strictly contractive Peaceman-Rachford splitting method

The Peaceman-Rachford splitting method is very efficient for minimizing the sum of two functions, each depending on its own variable, subject to a linear equality constraint. However, its convergence was not guaranteed without extra requirements. Very recently, He et al. (SIAM J. Optim. 24: 1011 – 1040, 2014) proved the convergence of a strictly contractive Peaceman-Rachford … Read more
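In one common notation, with underrelaxation factor $\alpha\in(0,1)$ (the "strictly contractive" ingredient), the multiplier is updated twice per iteration. The version below is a sketch of that scheme and may differ from the paper's semi-proximal variant in how the subproblems are regularized:

\[
\begin{aligned}
x^{k+1} &= \arg\min_x\; f(x) + \tfrac{\beta}{2}\,\|Ax + By^k - b + \lambda^k/\beta\|^2,\\
\lambda^{k+1/2} &= \lambda^k + \alpha\beta\,(Ax^{k+1} + By^k - b),\\
y^{k+1} &= \arg\min_y\; g(y) + \tfrac{\beta}{2}\,\|Ax^{k+1} + By - b + \lambda^{k+1/2}/\beta\|^2,\\
\lambda^{k+1} &= \lambda^{k+1/2} + \alpha\beta\,(Ax^{k+1} + By^{k+1} - b).
\end{aligned}
\]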

On the convergence rate of an inexact proximal point algorithm for quasiconvex minimization on Hadamard manifolds

In this paper we present a rate of convergence analysis of an inexact proximal point algorithm to solve minimization problems for quasiconvex objective functions on Hadamard manifolds. We prove that under natural assumptions the sequence generated by the algorithm converges linearly or superlinearly to a critical point of the problem. … Read more
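Schematically, the (inexact) proximal point iteration on a Hadamard manifold $M$ with Riemannian distance $d$ and regularization parameters $\lambda_k>0$ takes the form below; the precise inexactness criterion used in the paper is not reproduced here:

\[
x^{k+1} \approx \arg\min_{x \in M}\; \Big\{\, f(x) + \tfrac{\lambda_k}{2}\, d(x, x^k)^2 \,\Big\}.
\]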

Distributed Gradient Methods with Variable Number of Working Nodes

We consider distributed optimization where $N$ nodes in a connected network minimize the sum of their local costs subject to a common constraint set. We propose a distributed projected gradient method where each node, at each iteration $k$, performs an update (is active) with probability $p_k$, and stays idle (is inactive) with probability $1-p_k$. Whenever … Read more
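A generic Python sketch of one such round is given below, assuming a doubly stochastic mixing matrix `W`, per-node gradient callables `grads`, and a projection `proj` onto the common constraint set; it illustrates the random activation idea only and is not the paper's exact update rule.

```python
import numpy as np

def distributed_round(x, grads, W, proj, alpha, p_k, rng=None):
    """One round of a consensus + projected-gradient sketch: each of the N nodes
    is active with probability p_k and idle with probability 1 - p_k.

    x     : (N, d) array of local iterates
    grads : list of N callables, grads[i](x_i) -> gradient of node i's local cost
    W     : (N, N) doubly stochastic mixing matrix matched to the network
    proj  : projection onto the common constraint set
    """
    rng = np.random.default_rng() if rng is None else rng
    active = rng.random(x.shape[0]) < p_k           # which nodes work this round
    x_new = x.copy()
    for i in np.flatnonzero(active):
        mixed = W[i] @ x                            # weighted average of neighbours' iterates
        x_new[i] = proj(mixed - alpha * grads[i](x[i]))
    return x_new
```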