Convergence Rate of Projected Subgradient Method with Time-varying Step-sizes

We establish the optimal ergodic convergence rate for the classical projected subgradient method with time-varying step-sizes. This convergence rate remains the same even if we slightly increase the weight of the most recent points, thereby relaxing the ergodic sense.
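The method itself is classical; the sketch below shows the projected subgradient iteration with time-varying step sizes and the step-size-weighted ergodic average to which such rates apply. The objective $f(x)=\|x\|_1$, the Euclidean-ball feasible set, and the step sizes $t_k = 1/\sqrt{k+1}$ are illustrative assumptions of this example, not choices taken from the paper.

```python
# A minimal sketch of the projected subgradient method with
# time-varying step sizes and an ergodic (weighted-average) output.
# Objective, feasible set, and step-size rule are assumptions here.
import numpy as np

def project_ball(x, center, radius):
    """Euclidean projection onto the ball {x : ||x - center|| <= radius}."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

rng = np.random.default_rng(0)
c, r = rng.standard_normal(5), 2.0          # illustrative feasible set
x = project_ball(np.zeros(5), c, r)

weighted_sum, weight_total = np.zeros_like(x), 0.0
for k in range(1000):
    g = np.sign(x)                          # a subgradient of f(x) = ||x||_1
    t = 1.0 / np.sqrt(k + 1)                # time-varying step size
    x = project_ball(x - t * g, c, r)       # projected subgradient step
    weighted_sum += t * x                   # accumulate weighted iterates
    weight_total += t

x_bar = weighted_sum / weight_total         # ergodic iterate the rate applies to
print(np.abs(x_bar).sum())
```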

Bundle methods in the XXIst century: A bird’s-eye view

Bundle methods are often the algorithms of choice for nonsmooth convex optimization, especially if accuracy in the solution and reliability are a concern. We review several algorithms based on the bundle methodology that have been developed recently and that, unlike their forerunner variants, have the ability to provide exact solutions even if most of the …
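As background on what the bundle methodology entails (standard material, not a summary of the specific algorithms reviewed): a proximal bundle method maintains a cutting-plane model of $f$ built from previously computed subgradients and stabilizes the next iterate around a stability center $\hat x_k$:

$$\check f_k(x) = \max_{i \le k} \big\{ f(x_i) + \langle g_i, x - x_i \rangle \big\}, \qquad x_{k+1} = \arg\min_{x} \; \check f_k(x) + \frac{1}{2 t_k} \|x - \hat x_k\|^2,$$

where $g_i \in \partial f(x_i)$. The center $\hat x_k$ is moved only after sufficient decrease (a "serious step"), which is the mechanism behind the reliability mentioned above.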

Simultaneously solving seven optimization problems in relative scale

In this paper we develop and analyze an efficient algorithm which solves seven related optimization problems simultaneously, in relative scale. Each iteration of our method is very cheap, with main work spent on matrix-vector multiplication. We prove that if a certain sequence generated by the algorithm remains bounded, then the method must terminate in $O(1/\delta)$ …
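For readers unfamiliar with the term, "solving in relative scale" customarily means finding a point whose objective value is within a factor $(1+\delta)$ of the optimal value $f^* > 0$ (this is the standard definition, stated here as background rather than recovered from the truncated abstract):

$$f(x) \le (1 + \delta)\, f^*,$$

which is meaningful for problems with positive optimal value, such as norm-like homogeneous objectives, and is consistent with the $O(1/\delta)$ iteration bound quoted above.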