This paper shows that the OSGA algorithm, which uses first-order information to solve convex optimization problems with optimal complexity, can be used to efficiently solve arbitrary bound-constrained convex optimization problems. This is done by constructing an explicit method as well as an inexact scheme for solving the bound-constrained rational subproblem required by OSGA. This leads to an efficient implementation of OSGA for large-scale problems in applications arising in signal and image processing, machine learning, and statistics. Numerical experiments demonstrate the promising performance of OSGA on such problems. A software package implementing OSGA for bound-constrained convex problems is available.
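The following is a minimal illustrative sketch, not the paper's explicit method: it inexactly maximizes an OSGA-style rational subproblem E(z) = -(gamma + h^T z)/Q(z) over a box [l, u], assuming the standard quadratic prox function Q(z) = Q0 + ½‖z - z0‖² with Q0 > 0. All names and parameter values here are hypothetical choices made for illustration, and a generic bound-constrained solver stands in for the paper's scheme.

```python
# Hedged sketch: inexact maximization of an OSGA-style rational subproblem
# E(z) = -(gamma + h^T z) / Q(z) over a box, with Q(z) = Q0 + 0.5*||z - z0||^2.
# This is NOT the paper's explicit method; a generic bounded solver is used instead.
import numpy as np
from scipy.optimize import minimize


def solve_box_subproblem(gamma, h, z0, Q0, lower, upper):
    """Approximately maximize E(z) subject to lower <= z <= upper (illustrative only)."""
    def Q(z):
        # Quadratic prox function; Q0 > 0 keeps the denominator positive.
        return Q0 + 0.5 * np.dot(z - z0, z - z0)

    def neg_E(z):
        # Minimize the negative of E(z) = -(gamma + h @ z) / Q(z).
        return (gamma + h @ z) / Q(z)

    z_start = np.clip(z0, lower, upper)           # feasible starting point
    bounds = list(zip(lower, upper))               # box constraints l <= z <= u
    res = minimize(neg_E, z_start, bounds=bounds, method="L-BFGS-B")
    return res.x, -res.fun                         # approximate maximizer and E value


if __name__ == "__main__":
    # Tiny 3-dimensional usage example with simple bounds (hypothetical data).
    n = 3
    gamma, Q0 = -1.0, 0.5
    h = np.array([1.0, -2.0, 0.5])
    z0 = np.zeros(n)
    lower, upper = -np.ones(n), np.ones(n)
    u, e = solve_box_subproblem(gamma, h, z0, Q0, lower, upper)
    print("approximate maximizer:", u, "E value:", e)
```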
Citation: An optimal subgradient algorithm for large-scale bound-constrained convex optimization. Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria, 2015.