This paper discusses several (sub)gradient methods attaining the optimal complexity for smooth problems with Lipschitz continuous gradients, for nonsmooth problems with bounded variation of subgradients, and for weakly smooth problems with H\"older continuous gradients. The proposed schemes are optimal for smooth strongly convex problems with Lipschitz continuous gradients and optimal up to a logarithmic factor for nonsmooth problems with bounded variation of subgradients. More specifically, we propose two estimation sequences of the objective and derive two iterative schemes for each of them. In both cases, the first scheme requires the smoothness parameter and the H\"older constant, while the second scheme is parameter-free (except for the strong convexity parameter, which we set to zero if it is not available) at the price of applying a nonmonotone backtracking line search. A complexity analysis of all the proposed schemes is given. Numerical results for some applications in sparse optimization and machine learning are reported, which confirm the theoretical results.
Accelerated first-order methods for large-scale convex minimization. Faculty of Mathematics, University of Vienna, April 2016.
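To illustrate the kind of parameter-free scheme the abstract refers to, the following is a minimal sketch of a generic Nesterov-type accelerated gradient method in which the smoothness constant is estimated by backtracking rather than supplied in advance. This is not the paper's estimation-sequence construction: the function handles f and grad_f, the halving of the local estimate L between iterations (which makes the line search nonmonotone), and the least-squares test problem are all illustrative assumptions.

```python
# Generic accelerated gradient method with backtracking on the
# smoothness estimate L -- an illustrative sketch, not the paper's schemes.
import numpy as np

def accelerated_gradient(f, grad_f, x0, L0=1.0, max_iter=500, tol=1e-8):
    """Nesterov-type acceleration; L is adjusted by backtracking,
    so no Lipschitz constant of the gradient needs to be supplied."""
    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    for _ in range(max_iter):
        g = grad_f(y)
        # Backtracking: increase L until the quadratic upper bound
        # f(x_new) <= f(y) + <g, x_new - y> + (L/2)||x_new - y||^2 holds.
        while True:
            x_new = y - g / L
            d = x_new - y
            if f(x_new) <= f(y) + g @ d + 0.5 * L * (d @ d):
                break
            L *= 2.0
        # Standard momentum update of the extrapolation point y.
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x, t = x_new, t_new
        L *= 0.5  # allow L to decrease again (nonmonotone estimate)
    return x

# Example (hypothetical test problem): least squares  min_x 0.5||Ax - b||^2.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
x_star = accelerated_gradient(f, grad_f, np.zeros(20))
```

Allowing the estimate L to shrink between iterations is one simple way to keep the accepted steps from being monotonically conservative; the paper's own line-search rule may differ in its details.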