Fast convergence of inertial dynamics and algorithms with asymptotic vanishing damping

In a Hilbert space setting $\mathcal H$, we study the fast convergence properties as $t \to + \infty$ of the trajectories of the second-order differential equation \begin{equation*} \ddot{x}(t) + \frac{\alpha}{t} \dot{x}(t) + \nabla \Phi (x(t)) = g(t), \end{equation*} where $\nabla\Phi$ is the gradient of a convex continuously differentiable function $\Phi: \mathcal H \to \mathbb R$, … Read more
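
A rough numerical illustration (not the paper's analysis): the trajectory can be simulated by a simple time-stepping scheme. The quadratic $\Phi$, the value $\alpha = 3$, the forcing term $g \equiv 0$, and the step size below are all illustrative assumptions.

```python
# Minimal sketch: explicit time-stepping of the inertial ODE
#   x''(t) + (alpha/t) x'(t) + grad_Phi(x(t)) = 0   (g = 0 assumed)
import numpy as np

alpha = 3.0                          # damping coefficient in the alpha/t term
A = np.diag([1.0, 10.0])             # toy convex quadratic: Phi(x) = 0.5 x^T A x
grad_Phi = lambda x: A @ x

h = 1e-3                             # integration step size (assumed small enough)
t, x, v = 1.0, np.array([1.0, 1.0]), np.zeros(2)
for _ in range(200_000):
    a = -(alpha / t) * v - grad_Phi(x)   # acceleration dictated by the ODE
    v = v + h * a                        # update velocity, then position
    x = x + h * v
    t = t + h
print(t, x)                          # x(t) should approach argmin Phi = 0
```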

From error bounds to the complexity of first-order descent methods for convex functions

This paper shows that error bounds can be used as effective tools for deriving complexity results for first-order descent methods in convex minimization. As a first step toward this objective, we revisit the interplay between error bounds and the Kurdyka-Łojasiewicz (KL) inequality. One can show the equivalence between the two concepts for convex … Read more
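
For orientation, standard textbook forms of the two notions (a sketch with desingularizing function $\varphi$, Hölder exponent $\theta$, and constant $\gamma$; not necessarily the exact statements used in the paper) are the KL inequality and a Hölderian error bound: \begin{equation*} \varphi'\big(f(x) - \min f\big)\, \|\nabla f(x)\| \ \ge\ 1, \qquad \operatorname{dist}\big(x, \operatorname{argmin} f\big) \ \le\ \gamma \big(f(x) - \min f\big)^{\theta}. \end{equation*}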

An accelerated non-Euclidean hybrid proximal extragradient-type algorithm for convex-concave saddle-point problems

This paper describes an accelerated HPE-type method based on general Bregman distances for solving monotone saddle-point (SP) problems. The algorithm is a special instance of a non-Euclidean hybrid proximal extragradient framework introduced by Svaiter and Solodov [28] where the prox sub-inclusions are solved using an accelerated gradient method. It generalizes the accelerated HPE algorithm presented … Read more
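
A tiny sketch of the kind of non-Euclidean sub-step involved: one Bregman (mirror) proximal step over the simplex, where the Bregman distance is the Kullback-Leibler divergence. The function name, the toy operator value, and the step size are illustrative; this is not the paper's HPE scheme itself.

```python
# One entropic mirror step over the probability simplex:
#   argmin_y <g, y> + (1/eta) * KL(y, x)  has the closed form below.
import numpy as np

def bregman_step(x, g, eta):
    y = x * np.exp(-eta * g)         # multiplicative update from the KL geometry
    return y / y.sum()               # renormalize back onto the simplex

x = np.full(4, 0.25)                 # start at the simplex center
g = np.array([0.3, -0.1, 0.7, 0.0])  # stand-in for an operator/gradient evaluation
print(bregman_step(x, g, eta=1.0))
```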

New Computational Guarantees for Solving Convex Optimization Problems with First Order Methods, via a Function Growth Condition Measure

Motivated by recent work of Renegar, we present new computational methods and associated computational guarantees for solving convex optimization problems using first-order methods. Our problem of interest is the general convex optimization problem $f^* = \min_{x \in Q} f(x)$, where we presume knowledge of a strict lower bound $f_{\mathrm{slb}} < f^*$. [Indeed, $f_{\mathrm{slb}}$ is naturally … Read more
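
As background (this is not the paper's method, whose transformations are more involved), the classical Polyak subgradient step is the simplest way a known lower bound enters a first-order method: the step length is set from the gap $f(x_k) - f_{\mathrm{slb}}$. The toy objective below is an assumption for illustration.

```python
# Polyak subgradient method using a known lower bound f_slb <= f^*.
import numpy as np

f = lambda x: np.abs(x).sum()        # toy nonsmooth convex objective, f^* = 0
subgrad = lambda x: np.sign(x)       # a subgradient of the l1 norm

f_slb = 0.0                          # lower bound assumed known (tight in this toy)
x = np.array([3.0, -2.0])
for _ in range(100):
    g = subgrad(x)
    step = (f(x) - f_slb) / max(g @ g, 1e-12)   # Polyak step length
    x = x - step * g
print(f(x))                          # objective gap driven to ~0
```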

Noisy Euclidean distance realization: robust facial reduction and the Pareto frontier

We present two algorithms for large-scale low-rank Euclidean distance matrix completion problems, based on semidefinite optimization. Our first method works by relating cliques in the graph of the known distances to faces of the positive semidefinite cone, yielding a combinatorial procedure that is provably robust and parallelizable. Our second algorithm is a first-order method … Read more
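
For context, the correspondence that SDP approaches exploit is between squared Euclidean distance matrices and positive semidefinite Gram matrices. The sketch below (classical multidimensional scaling on a complete, noise-free EDM; not the paper's completion algorithm) recovers a point configuration from distances via double centering.

```python
# EDM -> Gram -> points: the PSD correspondence behind EDM completion.
import numpy as np

P = np.random.default_rng(0).standard_normal((5, 2))   # ground-truth points
D = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)     # squared distance matrix
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n                    # centering projector
G = -0.5 * J @ D @ J                                   # Gram matrix, PSD of rank 2
w, V = np.linalg.eigh(G)
X = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0.0))       # rank-2 embedding
D_hat = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
print(np.allclose(D, D_hat, atol=1e-8))                # distances are reproduced
```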

A survey on operator splitting and decomposition of convex programs

Many structured convex minimization problems can be modeled as the search for a zero of the sum of two monotone operators. Operator splitting methods have been designed to simultaneously decompose and regularize these kinds of models. Here we review these models and the classical splitting methods. We focus on the numerical sensitivity … Read more
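
As a concrete instance of such a splitting (illustrative sets and parameters; the resolvents of the two normal-cone operators reduce to projections), here is Douglas-Rachford applied to finding a point in the intersection of a ball and a hyperplane.

```python
# Douglas-Rachford splitting for 0 in N_C(x) + N_D(x), i.e. x in C ∩ D.
import numpy as np

proj_ball = lambda x: x / max(np.linalg.norm(x), 1.0)    # C: unit ball
proj_line = lambda x: x - (x.sum() - 1.0) / x.size       # D: hyperplane sum(x) = 1

z = np.array([5.0, -3.0])
for _ in range(200):
    x = proj_ball(z)                 # resolvent of the first operator
    y = proj_line(2 * x - z)         # resolvent of the second, at the reflection
    z = z + (y - x)                  # Douglas-Rachford update
print(proj_ball(z))                  # a point in the intersection
```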

Stochastic Approximations and Perturbations in Forward-Backward Splitting for Monotone Operators

We investigate the asymptotic behavior of a stochastic version of the forward-backward splitting algorithm for finding a zero of the sum of a maximally monotone set-valued operator and a cocoercive operator in Hilbert spaces. Our general setting features stochastic approximations of the cocoercive operator and stochastic perturbations in the evaluation of the resolvents of the … Read more
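
A toy sketch of the setting (not the paper's general framework): forward-backward splitting for $\ell_1$-regularized least squares, where the forward (cocoercive) operator is evaluated through a noisy stochastic estimate whose error is assumed to vanish over time.

```python
# Stochastic forward-backward: noisy gradient step, then an exact prox step.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
x_true = (np.arange(10) > 6).astype(float)
b = A @ x_true
lam = 0.1
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # prox of t*||.||_1

gamma = 1.0 / np.linalg.norm(A, 2) ** 2     # step within the cocoercivity bound
x = np.zeros(10)
for k in range(1, 5001):
    noise = rng.standard_normal(10) / k     # stochastic error, assumed vanishing
    grad = A.T @ (A @ x - b) + noise        # forward step uses the noisy estimate
    x = soft(x - gamma * grad, gamma * lam) # backward step: exact resolvent/prox
print(np.round(x, 2))                       # approximates the sparse x_true
```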

An optimal randomized incremental gradient method

In this paper, we consider a class of finite-sum convex optimization problems whose objective function is given by the sum of $m$ ($\ge 1$) smooth components together with some other relatively simple terms. We first introduce a deterministic primal-dual gradient (PDG) method that can achieve the optimal black-box iteration complexity for solving these composite optimization … Read more
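
For contrast with the deterministic PDG method, here is a plain randomized incremental gradient scheme with variance reduction (basic SAGA on a toy least-squares sum; this is not the paper's algorithm, and the data and step size are illustrative).

```python
# SAGA: sample one component per iteration, correct with stored gradients.
import numpy as np

rng = np.random.default_rng(0)
m, d = 20, 5
A = rng.standard_normal((m, d)); b = rng.standard_normal(m)
grad_i = lambda i, x: (A[i] @ x - b[i]) * A[i]       # gradient of the i-th term

x = np.zeros(d)
table = np.array([grad_i(i, x) for i in range(m)])   # stored component gradients
avg = table.mean(axis=0)
eta = 1.0 / (3 * (A ** 2).sum(axis=1).max())         # 1/(3 L_max) step size
for _ in range(5000):
    i = rng.integers(m)                              # random component index
    g_new = grad_i(i, x)
    x = x - eta * (g_new - table[i] + avg)           # variance-reduced step
    avg = avg + (g_new - table[i]) / m               # keep the running average
    table[i] = g_new
print(np.linalg.norm(A.T @ (A @ x - b)) / m)         # full gradient norm ≈ 0
```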

Understanding the Convergence of the Alternating Direction Method of Multipliers: Theoretical and Computational Perspectives

The alternating direction method of multipliers (ADMM) is a form of augmented Lagrangian algorithm that has experienced a renaissance in recent years due to its applicability to optimization problems arising from “big data” and image processing applications, and the relative ease with which it may be implemented in parallel and distributed computational environments. While it is … Read more
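
A compact instance of the method (scaled-form ADMM on the lasso, $\min \tfrac12\|Ax-b\|^2 + \lambda\|z\|_1$ subject to $x = z$; a standard textbook example with illustrative data, not drawn from the survey):

```python
# Scaled-form ADMM: x-update (least squares), z-update (prox), dual update.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 8)); b = rng.standard_normal(30)
lam, rho = 0.5, 1.0
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(8); z = np.zeros(8); u = np.zeros(8)
M = np.linalg.inv(A.T @ A + rho * np.eye(8))   # cache the x-update system
for _ in range(200):
    x = M @ (A.T @ b + rho * (z - u))          # minimize the augmented Lagrangian in x
    z = soft(x + u, lam / rho)                 # minimize in z: l1 prox
    u = u + x - z                              # scaled dual ascent on x = z
print(np.round(z, 3))
```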

A semi-proximal-based strictly contractive Peaceman-Rachford splitting method

The Peaceman-Rachford splitting method is very efficient for minimizing the sum of two functions, each depending on its own variable, subject to a linear equality constraint. However, its convergence is not guaranteed without extra requirements. Very recently, He et al. (SIAM J. Optim. 24:1011–1040, 2014) proved the convergence of a strictly contractive Peaceman-Rachford … Read more
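
To see why extra requirements matter, consider this toy two-set sketch: with $\theta = 1$ the update is the plain Peaceman-Rachford map $z \mapsto R_f R_g z$, which is only nonexpansive and need not converge; relaxing with $\theta < 1$ makes the map averaged and hence convergent. This relaxation is only a crude analogue of the strictly contractive scheme of He et al., and the sets and parameters are illustrative.

```python
# Relaxed Peaceman-Rachford on two sets (resolvents are projections).
import numpy as np

proj_box  = lambda x: np.clip(x, 0.0, 1.0)           # projection onto [0,1]^2
proj_line = lambda x: x - (x.sum() - 1.0) / x.size   # projection onto sum(x) = 1
reflect = lambda proj, x: 2 * proj(x) - x

theta = 0.9          # theta = 1 would be plain (possibly non-convergent) PRS
z = np.array([4.0, -2.0])
for _ in range(300):
    z = (1 - theta) * z + theta * reflect(proj_box, reflect(proj_line, z))
print(proj_line(z))  # a point in the box intersected with the hyperplane
```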