Exact convergence rate of the last iterate in subgradient methods

We study the convergence of the last iterate in subgradient methods applied to the minimization of a nonsmooth convex function with bounded subgradients.
We first introduce a proof technique that generalizes the standard analysis of subgradient methods. It is based on tracking the distance between the current iterate and a different reference point at each iteration. Using this technique, we obtain the exact worst-case convergence rate for the objective accuracy of the last iterate of the projected subgradient method with either constant step sizes or constant step lengths. Tightness is shown with a worst-case instance matching the established convergence rate.
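For concreteness, the following minimal sketch (not the paper's code; the problem instance, the projection, and all parameter values are illustrative assumptions) shows the projected subgradient method with either a constant step size or a constant step length, returning the last iterate whose accuracy is analyzed above.

```python
import numpy as np

def projected_subgradient(subgrad, project, x0, N, h=0.1, mode="step_size"):
    """Projected subgradient method returning the last iterate x_N.

    mode="step_size":   x_{k+1} = P(x_k - h * g_k)             (constant step size)
    mode="step_length": x_{k+1} = P(x_k - h * g_k / ||g_k||)   (constant step length)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(N):
        g = subgrad(x)
        if mode == "step_length":
            g = g / max(np.linalg.norm(g), 1e-12)  # normalize so each step has length h
        x = project(x - h * g)
    return x  # last iterate

# Illustrative instance (assumed): minimize f(x) = ||x||_1 over the Euclidean unit ball.
subgrad = lambda x: np.sign(x)                        # a subgradient of ||x||_1
project = lambda x: x / max(np.linalg.norm(x), 1.0)   # projection onto {||x|| <= 1}
x_last = projected_subgradient(subgrad, project, x0=np.ones(5), N=100, h=0.05)
```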
We also derive the value of the optimal constant step size when performing \(N\) iterations, for which we find that the last iterate accuracy is smaller than \(B R \sqrt{1+\log(N)/4}/\sqrt{N+1}\), where \(B\) is a bound on the subgradient norm and \(R\) is a bound on the distance between the initial iterate and a minimizer.
Finally, we introduce a new optimal subgradient method that achieves the best possible last-iterate accuracy after a given number \(N\) of iterations. Its convergence rate \(B R/\sqrt{N+1}\) exactly matches the lower bound on the performance of any black-box method on the considered problem class. We also show that no universal sequence of step sizes simultaneously achieves this optimal rate at every iteration, meaning that the dependence of the step-size sequence on \(N\) is unavoidable.
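As a quick sanity check on the gap between the two rates quoted above, the following hypothetical snippet evaluates the constant-step-size bound \(B R \sqrt{1+\log(N)/4}/\sqrt{N+1}\) against the optimal last-iterate rate \(B R/\sqrt{N+1}\) for a few values of \(N\); the values of \(B\) and \(R\) are arbitrary and only the formulas come from the abstract.

```python
import math

B, R = 1.0, 1.0  # assumed bounds: subgradient norm B and initial distance R
for N in (10, 100, 1000, 10000):
    constant_step_bound = B * R * math.sqrt(1 + math.log(N) / 4) / math.sqrt(N + 1)
    optimal_rate = B * R / math.sqrt(N + 1)
    print(f"N={N:6d}  constant-step bound={constant_step_bound:.4f}  "
          f"optimal rate={optimal_rate:.4f}  "
          f"gap factor={constant_step_bound / optimal_rate:.3f}")
```

The printed gap factor equals \(\sqrt{1+\log(N)/4}\), illustrating that the optimal constant step size loses only a logarithmic factor relative to the optimal method.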