Zero-Convex Functions, Perturbation Resilience, and Subgradient Projections for Feasibility-Seeking Methods

The convex feasibility problem (CFP) lies at the core of modeling many problems in various areas of science. Subgradient projection methods are important tools for solving the CFP because they enable the use of subgradient calculations instead of orthogonal projections onto the individual sets of the problem. Working in a real Hilbert space, … Read more
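As a concrete illustration, here is a minimal sketch of a cyclic subgradient projection method for a CFP whose sets have the form {x : g_i(x) <= 0}; the constraint choices, iteration count, and step rule below are illustrative assumptions, not the paper's exact scheme.

import numpy as np

def cyclic_subgradient_projections(gs, subgrads, x0, n_sweeps=200):
    """gs[i](x) -> g_i(x); subgrads[i](x) -> a subgradient of g_i at x."""
    x = x0.astype(float)
    for _ in range(n_sweeps):
        for g, sg in zip(gs, subgrads):
            viol = g(x)
            if viol > 0.0:                        # relax only violated constraints
                s = sg(x)
                x = x - (viol / (s @ s)) * s      # subgradient projection step
    return x

# Two sets: the unit ball {||x|| <= 1} and the halfspace {x_0 >= 0.5}.
gs = [lambda x: x @ x - 1.0, lambda x: 0.5 - x[0]]
subgrads = [lambda x: 2.0 * x, lambda x: np.array([-1.0, 0.0])]
x = cyclic_subgradient_projections(gs, subgrads, np.array([3.0, -2.0]))
print(x, [g(x) for g in gs])   # both constraint values end up near or below zero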

An accelerated HPE-type algorithm for a class of composite convex-concave saddle-point problems

This article proposes a new algorithm for solving a class of composite convex-concave saddle-point problems. The new algorithm is a special instance of the hybrid proximal extragradient framework in which a variant of Nesterov's accelerated method is used to approximately solve the prox subproblems. One of the advantages of the new method is that it works for … Read more
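To make the idea concrete, the sketch below runs an outer proximal-point loop in which each prox subproblem is solved approximately by a few steps of Nesterov's accelerated method. This is a simplified stand-in for the HPE framework, not the paper's algorithm; all parameter choices are assumptions.

import numpy as np

def nesterov(grad, L, x0, n_steps):
    # Standard Nesterov accelerated gradient for an L-smooth convex function.
    x, y, t = x0, x0, 1.0
    for _ in range(n_steps):
        x_new = y - grad(y) / L
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

def inexact_prox_point(grad_f, L, x0, lam=1.0, outer=20, inner=15):
    # Outer proximal-point loop; each subproblem
    #   argmin_z f(z) + ||z - x||^2 / (2*lam)
    # is (L + 1/lam)-smooth and is solved approximately by Nesterov's method.
    x = x0
    for _ in range(outer):
        g = lambda z, xc=x: grad_f(z) + (z - xc) / lam
        x = nesterov(g, L + 1.0 / lam, x, inner)
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]]); b = np.array([1.0, -1.0])
grad_f = lambda z: A @ z - b                      # f(z) = z'Az/2 - b'z
L = np.linalg.eigvalsh(A).max()
print(inexact_prox_point(grad_f, L, np.zeros(2)))  # approaches A^{-1} b = [0.6, -0.8]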

Stochastic Quasi-Fejér Block-Coordinate Fixed Point Iterations with Random Sweeping

This work investigates the properties of stochastic quasi-Fejér monotone sequences in Hilbert spaces and emphasizes their pertinence in the study of the convergence of block-coordinate fixed point methods. The iterative methods under investigation feature random sweeping rules to select the blocks of variables that are activated over the course of the iterations and allow for … Read more
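A minimal sketch of the flavor of such methods, assuming a simple averaged operator and two coordinate blocks, each activated independently with probability 1/2 (the random sweeping rule); the operator and activation probabilities are illustrative, not the paper's general setting.

import numpy as np

rng = np.random.default_rng(0)

# T is an averaged operator whose fixed points solve A x = b
# (the gradient-descent map for ||Ax - b||^2 with a safe step size).
A = rng.standard_normal((8, 8)); A = A.T @ A + np.eye(8)
b = rng.standard_normal(8)
step = 1.0 / np.linalg.eigvalsh(A).max()
T = lambda x: x - step * (A @ x - b)

x = np.zeros(8)
blocks = [np.arange(0, 4), np.arange(4, 8)]       # two coordinate blocks
for _ in range(3000):
    Tx = T(x)
    for blk in blocks:
        if rng.random() < 0.5:                    # random sweeping: each block
            x[blk] = Tx[blk]                      # is activated with prob. 1/2
print(np.linalg.norm(A @ x - b))                  # residual ends up near zero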

Forward-Backward Greedy Algorithms for Atomic-Norm Regularization

In many signal processing applications, one aims to reconstruct a signal that has a simple representation with respect to a certain basis or frame. Fundamental elements of the basis known as “atoms” allow us to define “atomic norms” that can be used to construct convex regularizers for the reconstruction problem. Efficient algorithms are available to … Read more
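As an illustration with the simplest atomic set, the coordinate directions (so the atomic norm is the l1-norm), here is a hedged sketch of a forward-backward greedy loop: a forward step adds the atom most correlated with the residual, the coefficients are refit, and a backward step drops atoms with negligible refit coefficients. Problem sizes and thresholds are assumptions.

import numpy as np

rng = np.random.default_rng(1)

A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[7, 42, 91]] = [2.0, -1.5, 3.0]
y = A @ x_true

support, x = [], np.zeros(100)
for _ in range(10):
    r = y - A @ x
    j = int(np.argmax(np.abs(A.T @ r)))           # forward: most correlated atom
    if j not in support:
        support.append(j)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    # backward: drop atoms whose refit coefficient is negligible
    support = [s for s, c in zip(support, coef) if abs(c) > 1e-8]
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x = np.zeros(100); x[support] = coef
print(support, np.round(coef, 2))                 # recovers atoms 7, 42, 91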

Convergence Rates with Inexact Nonexpansive Operators

In this paper, we present a convergence rate analysis for the inexact Krasnosel'skiĭ-Mann iteration built from nonexpansive operators. Our results include two main parts: we first establish global pointwise and ergodic iteration-complexity bounds, and then, under a metric subregularity assumption, we establish local linear convergence for the distance of the iterates to the set of … Read more
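A minimal sketch of an inexact Krasnosel'skiĭ-Mann iteration, assuming a summable perturbation sequence (a typical condition in such analyses; the paper's exact error assumptions may differ).

import numpy as np

rng = np.random.default_rng(2)

# A nonexpansive map whose fixed point solves A x = b.
A = rng.standard_normal((5, 5)); A = A.T @ A + np.eye(5)
b = rng.standard_normal(5)
step = 1.0 / np.linalg.eigvalsh(A).max()
T = lambda x: x - step * (A @ x - b)

x, lam = np.zeros(5), 0.5
for k in range(1, 2000):
    e = rng.standard_normal(5) / k**2             # summable error sequence
    x = (1 - lam) * x + lam * (T(x) + e)          # inexact KM step
print(np.linalg.norm(x - T(x)))                   # fixed-point residual -> 0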

An inertial alternating direction method of multipliers

In the context of convex optimization problems in Hilbert spaces, we introduce inertial effects into the classical ADMM numerical scheme and obtain in this way so-called inertial ADMM algorithms, whose convergence properties we investigate in detail. To this end we make use of the inertial version of the Douglas-Rachford splitting method for monotone … Read more
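A minimal inertial ADMM sketch on a toy problem, min 0.5||x - a||^2 + mu*||z||_1 subject to x = z; the extrapolation of the (z, u) pair with a fixed factor alpha is the inertial ingredient, and all parameter values are illustrative assumptions rather than the paper's choices.

import numpy as np

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

a = np.array([3.0, 0.2, -1.5]); mu, rho, alpha = 1.0, 1.0, 0.3
z = u = z_prev = u_prev = np.zeros(3)
for _ in range(200):
    zh = z + alpha * (z - z_prev)                 # inertial extrapolation
    uh = u + alpha * (u - u_prev)
    x = (a + rho * (zh - uh)) / (1.0 + rho)       # x-update (prox of 0.5||.-a||^2)
    z_prev, u_prev = z, u
    z = soft(x + uh, mu / rho)                    # z-update (soft-thresholding)
    u = uh + x - z                                # scaled dual update
print(x, z)   # both approach soft(a, mu) = [2.0, 0.0, -0.5]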

On the Direct Extension of ADMM for Multi-block Separable Convex Programming and Beyond: From Variational Inequality Perspective

When the alternating direction method of multipliers (ADMM) is extended directly to a multi-block separable convex minimization model whose objective function is the sum of more than two functions without coupled variables, it was recently shown that convergence is not guaranteed. This fact urges the development of efficient algorithms that can completely preserve the numerical … Read more
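The lack of convergence can be seen with the well-known three-block example of Chen, He, Ye, and Yuan: direct 3-block ADMM applied to the zero-objective feasibility problem A1*x1 + A2*x2 + A3*x3 = 0. The sketch below is a plain reimplementation; the starting point and iteration count are assumptions.

import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [1.0, 2.0, 2.0]])
x = np.array([1.0, 1.0, 1.0])
u = np.zeros(3)
for _ in range(1000):
    for i in range(3):                            # Gauss-Seidel block sweep
        others = A @ x - A[:, i] * x[i]           # contribution of the other blocks
        x[i] = -A[:, i] @ (others + u) / (A[:, i] @ A[:, i])
    u = u + A @ x                                 # scaled dual update, b = 0
print(np.linalg.norm(A @ x))                      # residual fails to vanish and typically grows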

Finding the largest low-rank clusters with Ky Fan 2-k-norm and l1-norm

We propose a convex optimization formulation with the Ky Fan 2-k-norm and l1-norm to find the k largest approximately rank-one submatrix blocks of a given nonnegative matrix that has low-rank block diagonal structure with noise. We analyze low-rank and sparsity structures of the optimal solutions using properties of these two matrix norms. We show that, under … Read more
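For reference, the two norms in the formulation can be computed directly: the Ky Fan 2-k-norm of a matrix is the l2-norm of its k largest singular values, and the l1-norm here is the entrywise one. A small sketch (the noisy block-diagonal test matrix is an assumed toy example):

import numpy as np

def ky_fan_2k_norm(M, k):
    """Ky Fan 2-k-norm: l2-norm of the k largest singular values of M."""
    s = np.linalg.svd(M, compute_uv=False)
    return np.sqrt(np.sum(np.sort(s)[::-1][:k] ** 2))

def l1_norm(M):
    """Entrywise l1-norm, which promotes sparse (block-supported) solutions."""
    return np.abs(M).sum()

rng = np.random.default_rng(3)
B1 = np.outer(rng.random(4), rng.random(4))       # two rank-one blocks
B2 = np.outer(rng.random(4), rng.random(4))
M = np.block([[B1, np.zeros((4, 4))], [np.zeros((4, 4)), B2]])
M += 0.01 * rng.random((8, 8))                    # nonnegative noise
print(ky_fan_2k_norm(M, 2), l1_norm(M))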

Intermediate gradient methods for smooth convex problems with inexact oracle

Between the robust but slow (primal or dual) gradient methods and the fast but error-sensitive fast gradient methods, our goal in this paper is to develop first-order methods for smooth convex problems with intermediate speed and intermediate sensitivity to errors. We develop a general family of first-order methods, the Intermediate Gradient Method (IGM), … Read more
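The trade-off can be seen numerically by feeding the same noisy oracle to the plain gradient method and to a fast gradient method. The sketch below demonstrates the phenomenon the IGM is designed to interpolate; it is not the IGM itself, and the additive-noise oracle model is an assumption.

import numpy as np

rng = np.random.default_rng(4)

A = rng.standard_normal((30, 30)); A = A.T @ A / 30 + 0.01 * np.eye(30)
b = rng.standard_normal(30)
L = np.linalg.eigvalsh(A).max()
f = lambda x: 0.5 * x @ A @ x - b @ x
noisy_grad = lambda x: A @ x - b + 1e-3 * rng.standard_normal(30)

x = np.zeros(30)
for _ in range(500):                              # primal gradient method: slow, robust
    x = x - noisy_grad(x) / L
xg = x

x = y = np.zeros(30); t = 1.0
for _ in range(500):                              # fast gradient method: quick, accumulates error
    xn = y - noisy_grad(y) / L
    tn = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
    y = xn + ((t - 1) / tn) * (xn - x)
    x, t = xn, tn
print(f(xg), f(x))   # compare: the fast method trades error-robustness for speed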

First-order methods with inexact oracle: the strongly convex case

The goal of this paper is to study the effect of inexact first-order information on the first-order methods designed for smooth strongly convex optimization problems. We introduce the notion of a (delta,L,mu)-oracle, which can be seen as an extension of the previously introduced inexact (delta,L)-oracle that takes strong convexity into account. We consider different examples of (delta,L,mu)-oracle: … Read more
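A minimal numerical picture, assuming the oracle inexactness is additive gradient noise of size delta (one possible instance, not the paper's definition): on an L-smooth, mu-strongly convex quadratic, the gradient method converges linearly until it stalls at a delta-dependent error level.

import numpy as np

rng = np.random.default_rng(5)

A = np.diag(np.linspace(0.5, 5.0, 20))            # mu = 0.5, L = 5.0
b = rng.standard_normal(20)
x_star = np.linalg.solve(A, b)
delta = 1e-4

x = np.zeros(20)
for k in range(200):
    g = A @ x - b + delta * rng.standard_normal(20)   # inexact gradient oracle
    x = x - g / 5.0                               # step size 1/L
    if k % 50 == 0:
        print(k, np.linalg.norm(x - x_star))      # linear decay, then a delta-level plateau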