Variants of the A-HPE and large-step A-HPE algorithms for strongly convex problems with applications to accelerated high-order tensor methods

For solving strongly convex optimization problems, we propose and study the global convergence of variants of the A-HPE and large-step A-HPE algorithms of Monteiro and Svaiter. We prove \emph{linear} and \emph{superlinear} $\mathcal{O}\left(k^{\,-k\left(\frac{p-1}{p+1}\right)}\right)$ global rates for the proposed variants of the A-HPE and large-step A-HPE methods, respectively. The parameter $p\geq 2$ appears in the (high-order) … Read more
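For concreteness, plugging the smallest orders into the superlinear rate above (simple substitution into the stated formula, not an additional claim from the paper):

```latex
% p = 2 and p = 3 instances of the superlinear rate
\mathcal{O}\!\left(k^{-k\frac{p-1}{p+1}}\right)
\;\Longrightarrow\;
\mathcal{O}\!\left(k^{-k/3}\right)\ \ (p=2),
\qquad
\mathcal{O}\!\left(k^{-k/2}\right)\ \ (p=3).
```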

A single potential governing convergence of conjugate gradient, accelerated gradient and geometric descent

Nesterov’s accelerated gradient (AG) method for minimizing a smooth strongly convex function $f$ is known to reduce $f({\bf x}_k)-f({\bf x}^*)$ by a factor of $\epsilon\in(0,1)$ after $k=O(\sqrt{L/\ell}\log(1/\epsilon))$ iterations, where $\ell,L$ are the two parameters of smooth strong convexity. Furthermore, it is known that this is the best possible complexity in the function-gradient oracle model of … Read more
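For readers who want to see the iteration behind this rate, here is a minimal sketch of the constant-momentum form of AG for an $L$-smooth, $\ell$-strongly convex objective. The momentum coefficient $(\sqrt{\kappa}-1)/(\sqrt{\kappa}+1)$ with $\kappa=L/\ell$ is the standard textbook choice; `grad` and `x0` are hypothetical inputs, and this is not the paper's potential-function analysis, only the classical scheme it studies.

```python
import numpy as np

def nesterov_ag_strongly_convex(grad, x0, L, ell, num_iters=100):
    """Constant-momentum AG for an L-smooth, ell-strongly convex f.
    Textbook variant; `grad` and `x0` are hypothetical user inputs."""
    kappa = L / ell                                          # condition number
    beta = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)   # momentum weight
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    for _ in range(num_iters):
        x_next = y - grad(y) / L                  # 1/L gradient step at y
        y = x_next + beta * (x_next - x)          # momentum extrapolation
        x = x_next
    return x

# Example: quadratic with spectrum in [ell, L] = [1, 100]
A = np.diag([1.0, 100.0])
x_end = nesterov_ag_strongly_convex(lambda x: A @ x, np.ones(2), L=100.0, ell=1.0)
```

On such a quadratic the error contracts by roughly $1-1/\sqrt{\kappa}$ per step, which is what produces the $O(\sqrt{L/\ell}\log(1/\epsilon))$ iteration count quoted above.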

First-order methods with inexact oracle: the strongly convex case

The goal of this paper is to study the effect of inexact first-order information on first-order methods designed for smooth strongly convex optimization problems. We introduce the notion of a $(\delta,L,\mu)$-oracle, which can be seen as an extension of the inexact $(\delta,L)$-oracle previously introduced, taking strong convexity into account. We consider different examples of $(\delta,L,\mu)$-oracle: … Read more
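As a rough illustration of how such an oracle might be consumed, here is a toy sketch in which the oracle returns additively perturbed values and gradients. This noise model is an assumption made here for illustration; the paper's $(\delta,L,\mu)$-oracle is defined by two-sided quadratic bounds, and `make_inexact_oracle` is a hypothetical helper.

```python
import numpy as np

def make_inexact_oracle(f, grad, delta, rng):
    """Toy oracle returning perturbed first-order information.
    Illustrative assumption only: the paper's (delta, L, mu)-oracle is
    defined by two-sided quadratic bounds, not by this noise model."""
    def oracle(y):
        noise = rng.uniform(-1.0, 1.0, size=y.shape)
        return f(y), grad(y) + np.sqrt(delta) * noise
    return oracle

def inexact_gradient_method(oracle, x0, L, num_iters=500):
    """Plain 1/L gradient steps driven by the inexact oracle."""
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iters):
        _, g = oracle(x)
        x = x - g / L
    return x

# Example: f(x) = 0.5 * ||x||^2 (mu = L = 1), oracle error delta = 1e-4
rng = np.random.default_rng(0)
oracle = make_inexact_oracle(lambda x: 0.5 * x @ x, lambda x: x, 1e-4, rng)
x_end = inexact_gradient_method(oracle, np.ones(3), L=1.0)
```

A recurring theme in this line of work is that the error $\delta$ does not average out: methods driven by an inexact oracle typically converge linearly only down to an accuracy floor proportional to $\delta$, rather than to the exact minimum.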

OSGA: A fast subgradient algorithm with optimal complexity

This paper presents an algorithm for approximately minimizing a convex function over simple, not necessarily bounded, convex domains, assuming only that function values and subgradients are available. No global information about the objective function is needed apart from a strong convexity parameter (which can be set to zero if only convexity is known). The worst … Read more
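To make the black-box model concrete: the method only ever touches the objective through value and subgradient queries. The sketch below shows that interface with a plain normalized subgradient method. This is emphatically not OSGA, merely the simplest algorithm living in the same oracle model.

```python
import numpy as np

def subgradient_method(f, subgrad, x0, num_iters=1000):
    """Minimal method in the value/subgradient black-box model.
    NOT OSGA: just the classical normalized subgradient scheme with
    a 1/sqrt(k) step size, shown to illustrate the oracle interface."""
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(1, num_iters + 1):
        g = subgrad(x)
        gnorm = np.linalg.norm(g)
        if gnorm == 0.0:                   # a zero subgradient certifies optimality
            return x
        x = x - g / (gnorm * np.sqrt(k))   # normalized 1/sqrt(k) step
        fx = f(x)
        if fx < best_f:                    # keep the best iterate seen, since
            best_x, best_f = x.copy(), fx  # subgradient steps are not monotone
    return best_x

# Example: f(x) = ||x||_1, with np.sign as a valid subgradient
x_best = subgradient_method(lambda x: np.abs(x).sum(), np.sign, np.array([2.0, -3.0]))
```

Tracking the best iterate is the standard fix for the non-monotonicity of subgradient steps; OSGA's contribution is to achieve the optimal complexity in this same oracle model without such a slow step-size schedule.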