The stochastic Ravine accelerated gradient method with general extrapolation coefficients

Abstract: In a real Hilbert space domain setting, we study the convergence properties of the stochastic Ravine accelerated gradient method for convex differentiable optimization. We consider the general form of this algorithm where the extrapolation coefficients can vary with each iteration, and where the evaluation of the gradient is subject to random errors. This general …
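As a rough illustration of the kind of scheme the abstract describes (not the paper's exact method), the sketch below runs a Nesterov-style extrapolation loop with iteration-dependent coefficients and an additive Gaussian gradient error; the step size, the coefficient schedule, and the noise model are all illustrative assumptions:

```python
import random

def stochastic_ravine(grad, y0, step, alphas, noise=0.0, seed=0):
    """Ravine-style accelerated gradient sketch with noisy gradient evaluations.

    alphas : per-iteration extrapolation coefficients (general, as in the abstract)
    noise  : std-dev of an additive Gaussian gradient error (hypothetical model)
    """
    rng = random.Random(seed)
    y = x_prev = y0
    for alpha in alphas:
        g = grad(y) + rng.gauss(0.0, noise)  # gradient evaluated with random error
        x = y - step * g                     # gradient step from the extrapolated point
        y = x + alpha * (x - x_prev)         # extrapolation with varying coefficient
        x_prev = x
    return x

# Toy run on f(x) = x^2 (so grad f is 2x, L = 2) with the Nesterov-like
# schedule alpha_k = k/(k+3), here an illustrative choice.
x_star = stochastic_ravine(lambda x: 2.0 * x, y0=5.0, step=0.4,
                           alphas=[k / (k + 3) for k in range(100)])
```

With `noise=0.0` the loop reduces to a deterministic accelerated gradient method and drives the iterate to the minimizer of the toy objective.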

Fast convex optimization via inertial dynamics with Hessian driven damping

We first study the fast minimization properties of the trajectories of the second-order evolution equation \begin{equation*} \ddot{x}(t) + \frac{\alpha}{t} \dot{x}(t) + \beta \nabla^2 \Phi (x(t))\dot{x} (t) + \nabla \Phi (x(t)) = 0, \end{equation*} where $\Phi : \mathcal H \to \mathbb R$ is a smooth convex function acting on a real Hilbert space $\mathcal H$, and …
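For a quadratic toy objective the Hessian is constant, so the trajectory of this evolution equation can be sketched with a plain explicit Euler integration; the objective, the parameter values, and the discretization are illustrative assumptions, not the paper's analysis:

```python
def hessian_damped_trajectory(lam=1.0, alpha=3.0, beta=0.5,
                              x0=1.0, t0=1.0, h=1e-3, T=20.0):
    """Explicit Euler integration of
        x'' + (alpha/t) x' + beta * Phi''(x) x' + Phi'(x) = 0
    for the quadratic toy objective Phi(x) = (lam/2) x^2,
    so Phi'(x) = lam*x and Phi''(x) = lam (constant Hessian)."""
    t, x, v = t0, x0, 0.0
    while t < T:
        # acceleration given by the ODE: viscous + Hessian-driven damping + gradient
        a = -(alpha / t) * v - beta * lam * v - lam * x
        x += h * v
        v += h * a
        t += h
    return x

x_T = hessian_damped_trajectory()
```

The combined damping terms drive the trajectory toward the minimizer $x = 0$ of the toy objective, the Hessian-driven term attenuating the oscillations of the purely inertial dynamic.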

The rate of convergence of Nesterov’s accelerated forward-backward method is actually $o(k^{-2})$

The {\it forward-backward algorithm} is a powerful tool for solving optimization problems with an {\it additively separable} and {\it smooth} + {\it nonsmooth} structure. In the convex setting, a simple but ingenious acceleration scheme developed by Nesterov has proved useful for improving the theoretical rate of convergence of the function values from the standard …
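A minimal sketch of the accelerated forward-backward scheme (in its classical FISTA form, which the abstract does not spell out) on a scalar lasso-type toy problem; the objective, the step size, and the momentum schedule are illustrative choices:

```python
def soft_threshold(z, tau):
    """Proximal map of tau * |.| (the nonsmooth part)."""
    return max(z - tau, 0.0) + min(z + tau, 0.0)

def fista(grad_f, prox, L, x0, iters=500):
    """Nesterov-accelerated forward-backward sketch.
    prox must be the proximal map of g/L (nonsmooth part scaled by the step 1/L)."""
    x_prev = y = x0
    t = 1.0
    for _ in range(iters):
        x = prox(y - grad_f(y) / L)                  # forward (gradient) + backward (prox) step
        t_next = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)  # Nesterov extrapolation
        x_prev, t = x, t_next
    return x_prev

# Toy problem: minimize (1/2)(x - 3)^2 + 0.5*|x|; the minimizer is 3 - 0.5 = 2.5.
# L = 4 is used as a (deliberately loose) bound on the Lipschitz constant of grad f.
sol = fista(grad_f=lambda x: x - 3.0,
            prox=lambda z: soft_threshold(z, 0.5 / 4.0),
            L=4.0, x0=0.0)
```

The acceleration lies entirely in the extrapolation step: replacing `y` by `x` recovers the plain forward-backward method with its slower $O(1/k)$ guarantee on function values.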

Fast convergence of inertial dynamics and algorithms with asymptotic vanishing damping

In a Hilbert space setting $\mathcal H$, we study the fast convergence properties as $t \to + \infty$ of the trajectories of the second-order differential equation \begin{equation*} \ddot{x}(t) + \frac{\alpha}{t} \dot{x}(t) + \nabla \Phi (x(t)) = g(t), \end{equation*} where $\nabla\Phi$ is the gradient of a convex continuously differentiable function $\Phi: \mathcal H \to \mathbb R$, …

A dynamic approach to a proximal-Newton method for monotone inclusions in Hilbert spaces, with complexity $\mathcal{O}(1/n^2)$

In a Hilbert setting, we introduce a new dynamical system and associated algorithms aimed at rapidly solving monotone inclusions. Given a maximal monotone operator $A$, the evolution is governed by the time-dependent operator $I -(I + \lambda(t) {A})^{-1}$, where, in the resolvent, the positive control parameter $\lambda(t)$ tends to infinity as $t …
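The governing operator $I -(I + \lambda(t) A)^{-1}$ suggests, as a crude discrete analogue, proximal-point steps with a control parameter growing to infinity. The sketch below (the schedule and the choice of operator are illustrative assumptions, not the paper's algorithm) applies this to $A = \partial|\cdot|$, whose resolvent is the soft-thresholding map:

```python
def resolvent_abs(x, lam):
    """Resolvent (I + lam*A)^{-1} for A = subdifferential of |.|,
    i.e. the soft-thresholding map with parameter lam."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def growing_resolvent_iteration(x0, iters=50):
    """Discrete analogue of the dynamic x'(t) = -(I - (I + lam(t) A)^{-1}) x(t):
    full resolvent steps with a parameter lam_k that tends to infinity
    (the quadratic schedule below is a hypothetical choice)."""
    x = x0
    for k in range(iters):
        lam = (k + 1) ** 2
        x = resolvent_abs(x, lam)
    return x

zero = growing_resolvent_iteration(10.0)
```

Because $\lambda_k \to \infty$, the resolvent steps push the iterate toward the zero set of $A$, here the single point $x = 0$.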

A dynamic gradient approach to Pareto optimization with nonsmooth nonconvex objective functions

In a general Hilbert framework, we consider continuous gradient-like dynamical systems for constrained multiobjective optimization involving nonsmooth convex objective functions. Our approach is in line with a previous work, which considered the case of convex differentiable objective functions. Based on the Yosida regularization of the subdifferential operators involved in the system, we obtain …

A continuous gradient-like dynamical approach to Pareto-optimization in Hilbert spaces

In a Hilbert space setting, we consider new continuous gradient-like dynamical systems for constrained multiobjective optimization. This type of dynamics was first investigated by Cl. Henry and B. Cornet as a model of the allocation of resources in economics. Based on the Yosida regularization of the discontinuous part of the vector field which governs the system, …

Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods

In view of the minimization of a nonsmooth nonconvex function f, we prove an abstract convergence result for descent methods satisfying a sufficient-decrease assumption and allowing a relative error tolerance. Our result guarantees the convergence of bounded sequences, under the assumption that the function f satisfies the Kurdyka-Lojasiewicz inequality. This assumption allows us to cover a …
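The sufficient-decrease assumption mentioned above can be made concrete on a small example. The sketch below (the objective, step size, and decrease constant are illustrative choices) runs gradient descent on a nonconvex semi-algebraic function and checks the inequality at every iteration:

```python
def descent_with_sufficient_decrease(f, grad, x0, step=0.01, a=1.0, iters=200):
    """Gradient-descent sketch checking the sufficient-decrease condition
        f(x_{k+1}) + a * |x_{k+1} - x_k|^2 <= f(x_k)
    assumed by abstract convergence results of this type."""
    x = x0
    for _ in range(iters):
        x_new = x - step * grad(x)
        # the sufficient-decrease inequality must hold at every iteration
        assert f(x_new) + a * (x_new - x) ** 2 <= f(x) + 1e-12
        x = x_new
    return x

# Nonconvex semi-algebraic example f(x) = (x^2 - 1)^2, which satisfies the
# Kurdyka-Lojasiewicz inequality; started at x0 = 2, the iterates converge
# to the critical point x = 1.
x_lim = descent_with_sufficient_decrease(lambda x: (x * x - 1.0) ** 2,
                                         lambda x: 4.0 * x * (x * x - 1.0),
                                         x0=2.0)
```

For gradient descent with step $s$ on a locally $L$-smooth function, the inequality holds whenever $a \le 1/s - L/2$, which the small step size above guarantees along the trajectory.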

A Continuous Dynamical Newton-Like Approach to Solving Monotone Inclusions

We introduce non-autonomous continuous dynamical systems which are linked to Newton and Levenberg-Marquardt methods. They aim at solving inclusions governed by maximal monotone operators in Hilbert spaces. Relying on the Minty representation of maximal monotone operators as Lipschitzian manifolds, we show that these dynamics can be formulated as first-order in time differential systems, which are relevant …

Alternating proximal algorithms for constrained variational inequalities. Application to domain decomposition for PDE’s

Let $\mathcal X,\mathcal Y,\mathcal Z$ be real Hilbert spaces, let $f : \mathcal X \rightarrow \mathbb R\cup\{+\infty\}$, $g : \mathcal Y \rightarrow \mathbb R\cup\{+\infty\}$ be closed convex functions, and let $A : \mathcal X \rightarrow \mathcal Z$, $B : \mathcal Y \rightarrow \mathcal Z$ be continuous linear operators. Consider the constrained minimization problem $$(\mathcal P)\qquad \min\{f(x)+g(y) : Ax=By\}.$$ Given a sequence $(\gamma_n)$ which tends toward …
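A scalar sketch of an alternating proximal scheme for $(\mathcal P)$, with $A = B = I$ (constraint $x = y$), quadratic $f$ and $g$ so that both proximal steps are closed-form, and a penalization sequence $\gamma_n \to \infty$; the quadratic penalty, the schedule $\gamma_n = n+1$, and the specific data are illustrative assumptions:

```python
def alternating_proximal(prox_f, prox_g, gammas, x0=0.0, y0=0.0):
    """Alternating proximal sketch for min f(x) + g(y) subject to x = y,
    using the quadratic penalty (gamma_n/2)(x - y)^2 with gamma_n -> infinity."""
    x, y = x0, y0
    for gamma in gammas:
        x = prox_f(x, y, gamma)  # proximal step in x with the current penalty
        y = prox_g(y, x, gamma)  # proximal step in y, using the updated x
    return x, y

# Toy data: f(x) = (x-1)^2/2 and g(y) = (y-3)^2/2; the minimizer of f + g
# under the constraint x = y is x = y = 2.
# Closed-form minimizer of x -> f(x) + (gamma/2)(x - y)^2 + (1/2)(x - x_n)^2:
prox_f = lambda xn, y, g: (1.0 + g * y + xn) / (2.0 + g)
# and symmetrically for y -> g(y) + (gamma/2)(x - y)^2 + (1/2)(y - y_n)^2:
prox_g = lambda yn, x, g: (3.0 + g * x + yn) / (2.0 + g)

x, y = alternating_proximal(prox_f, prox_g,
                            gammas=[n + 1 for n in range(500)])
```

As $\gamma_n$ grows, the penalty progressively enforces the coupling constraint while the proximal regularization keeps each step well-posed, and the pair $(x, y)$ approaches the constrained minimizer.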