Convergence Results for Primal-Dual Algorithms in the Presence of Adjoint Mismatch

Most optimization problems arising in imaging science involve high-dimensional linear operators and their adjoints. In the implementations of these operators, approximations may be introduced for various practical considerations (e.g., memory limitation, computational cost, convergence speed), leading to an adjoint mismatch. This occurs in the X-ray tomographic inverse problems found in Computed Tomography (CT), where the …
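
As a concrete illustration (not taken from the abstract itself), adjoint mismatch means that a primal-dual update which should use the exact adjoint $K^*$, e.g.
$$x_{k+1} = \mathrm{prox}_{\tau f}\big(x_k - \tau K^* y_k\big),$$
is actually computed with a surrogate operator $\tilde{K} \approx K^*$ (such as an unmatched backprojector in CT), so convergence must be re-examined for $\tilde{K} \neq K^*$.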

On Quasi-Newton Forward–Backward Splitting: Proximal Calculus and Convergence

We introduce a framework for quasi-Newton forward–backward splitting algorithms (proximal quasi-Newton methods) with a metric induced by diagonal ± rank-$r$ symmetric positive definite matrices. This special type of metric allows for a highly efficient evaluation of the proximal mapping. The key to this efficiency is a general proximal calculus in the new metric. By using …
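
A minimal sketch of the variable-metric proximal mapping involved, assuming a metric $V = D \pm QQ^\top$ with $D$ diagonal and $Q \in \mathbb{R}^{n \times r}$:
$$\mathrm{prox}^V_{g}(x) = \arg\min_{z}\; g(z) + \tfrac{1}{2}\|z - x\|_V^2, \qquad \|z\|_V^2 = \langle z, Vz \rangle.$$
The low-rank structure of $V$ is what makes this mapping cheap to evaluate via the proximal calculus the abstract refers to.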

A forward-backward penalty scheme with inertial effects for monotone inclusions. Applications to convex bilevel programming

We investigate a forward-backward splitting algorithm of penalty type with inertial effects for finding the zeros of the sum of a maximally monotone operator, a cocoercive operator, and the convex normal cone to the set of zeros of another cocoercive operator. Weak ergodic convergence is obtained for the iterates, provided that a condition express …
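
A hedged sketch of the kind of iteration described (the exact scheme is in the paper): with $A$ maximally monotone, $B$ cocoercive, $C$ the cocoercive operator whose zeros define the constraint, inertial parameters $\alpha_k$, step sizes $\lambda_k$, and penalty parameters $\beta_k$,
$$x_{k+1} = J_{\lambda_k A}\big(x_k + \alpha_k(x_k - x_{k-1}) - \lambda_k B x_k - \lambda_k \beta_k C x_k\big),$$
where $J_{\lambda A} = (\mathrm{Id} + \lambda A)^{-1}$ denotes the resolvent.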

Non-smooth Non-convex Bregman Minimization: Unification and New Algorithms

We propose a unifying algorithm for non-smooth non-convex optimization. The algorithm approximates the objective function by a convex model function and finds an approximate (Bregman) proximal point of the convex model. This approximate minimizer of the model function yields a descent direction, along which the next iterate is found. Complemented with an Armijo-like line search …
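
A rough sketch of one iteration, under the assumptions of the abstract (convex model function $f_{x_k}$ around the current iterate, Bregman distance $D_h$, parameter $\gamma > 0$):
$$z_k \approx \arg\min_{z}\; f_{x_k}(z) + \tfrac{1}{\gamma} D_h(z, x_k), \qquad d_k = z_k - x_k,$$
followed by an Armijo-like line search along the descent direction $d_k$ to obtain $x_{k+1}$.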

Convergence rate analysis of the forward-Douglas-Rachford splitting scheme

Operator splitting schemes are a class of powerful algorithms that solve complicated monotone inclusions and convex optimization problems that are built from many simpler pieces. They give rise to algorithms in which all simple pieces of the decomposition are processed individually. This leads to easily implementable and highly parallelizable or distributed algorithms, which often obtain …
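
The problem class targeted by this particular scheme can be sketched (under standard assumptions in the splitting literature) as the monotone inclusion
$$0 \in A x + B x + N_V(x),$$
where $A$ is maximally monotone and accessed through its resolvent, $B$ is cocoercive and accessed through forward evaluations, and $N_V$ is the normal cone of a closed vector space $V$ handled by projection.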

Convergence rate analysis of primal-dual splitting schemes

Primal-dual splitting schemes are a class of powerful algorithms that solve complicated monotone inclusions and convex optimization problems that are built from many simpler pieces. They decompose problems that are built from sums, linear compositions, and infimal convolutions of simple functions so that each simple term is processed individually via proximal mappings, gradient mappings, and …
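
One well-known member of this class is the Chambolle-Pock scheme for $\min_x f(x) + g(Kx)$; a minimal sketch:
$$x_{k+1} = \mathrm{prox}_{\tau f}(x_k - \tau K^* y_k), \qquad y_{k+1} = \mathrm{prox}_{\sigma g^*}\big(y_k + \sigma K(2x_{k+1} - x_k)\big),$$
in which each simple term enters only through its own proximal mapping and $K$ only through matrix-vector products.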

Block-wise Alternating Direction Method of Multipliers for Multiple-block Convex Programming and Beyond

The alternating direction method of multipliers (ADMM) is a benchmark for solving a linearly constrained convex minimization model with a two-block separable objective function, and it has been shown that its direct extension to a multiple-block case where the objective function is the sum of more than two functions is not necessarily convergent. For the …
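
For reference, the classical two-block ADMM for $\min_{x,y}\, f(x) + g(y)$ subject to $Ax + By = b$ iterates (up to sign conventions for the multiplier update)
$$x_{k+1} = \arg\min_x \mathcal{L}_\beta(x, y_k, \lambda_k), \quad y_{k+1} = \arg\min_y \mathcal{L}_\beta(x_{k+1}, y, \lambda_k), \quad \lambda_{k+1} = \lambda_k - \beta\,(A x_{k+1} + B y_{k+1} - b),$$
where $\mathcal{L}_\beta$ is the augmented Lagrangian; the block-wise variants discussed here group the blocks so that this two-block pattern can be applied safely.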

Local Linear Convergence of Forward–Backward under Partial Smoothness

In this paper, we consider the Forward–Backward proximal splitting algorithm to minimize the sum of two proper closed convex functions, one of which has a Lipschitz-continuous gradient while the other is partly smooth relative to an active manifold $\mathcal{M}$. We propose a unified framework in which we show that the Forward–Backward (i) correctly identifies the …
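
The iteration in question is the standard Forward–Backward step for $\min_x f(x) + g(x)$ with $f$ smooth; a minimal sketch:
$$x_{k+1} = \mathrm{prox}_{\gamma g}\big(x_k - \gamma \nabla f(x_k)\big), \qquad \gamma \in (0, 2/L),$$
where $L$ is the Lipschitz constant of $\nabla f$; the local analysis concerns the regime in which the iterates have landed on the active manifold $\mathcal{M}$ of $g$.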

Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods

In view of the minimization of a nonsmooth nonconvex function f, we prove an abstract convergence result for descent methods satisfying a sufficient-decrease assumption, and allowing a relative error tolerance. Our result guarantees the convergence of bounded sequences, under the assumption that the function f satisfies the Kurdyka-Lojasiewicz inequality. This assumption allows us to cover a …
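
The two abstract conditions can be sketched as follows (in the spirit of this line of work; constants $a, b > 0$ are assumptions here): sufficient decrease,
$$f(x_{k+1}) + a\,\|x_{k+1} - x_k\|^2 \le f(x_k),$$
and a relative error bound on the subgradients,
$$\exists\, w_{k+1} \in \partial f(x_{k+1}): \quad \|w_{k+1}\| \le b\,\|x_{k+1} - x_k\|.$$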

Convergence rate of inexact proximal point methods with relative error criteria for convex optimization

In this paper, we consider a class of inexact proximal point methods for convex optimization which allows a relative error tolerance in the approximate solution of each proximal subproblem. By exploiting the special structure of convex optimization problems, we are able to derive surprising complexity bounds for the aforementioned class. As a consequence, we show …
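
A hedged sketch of such a relative error criterion, in the style of hybrid proximal extragradient methods (with tolerance $\sigma \in [0,1)$): an inexact solution of the subproblem $\min_x f(x) + \tfrac{1}{2\lambda_k}\|x - x_{k-1}\|^2$ is accepted when there exist $v_k \in \partial_{\varepsilon_k} f(x_k)$ with
$$\|\lambda_k v_k + x_k - x_{k-1}\|^2 + 2\lambda_k \varepsilon_k \le \sigma^2\, \|x_k - x_{k-1}\|^2,$$
so the allowed error shrinks with the step length rather than being required to be summable.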