We study a class of linear programming (LP) problems motivated by large-scale machine learning applications. After reformulating the LP as a convex nonsmooth problem, we apply Nesterov's primal-dual smoothing technique. The iteration complexity of the smoothing technique depends on a parameter $\theta$ that arises because the originally unbounded primal feasible set must be bounded artificially. We design a strategy that dynamically updates $\theta$ to accelerate convergence. Applying our algorithm to two machine learning problems demonstrates several advantages of the smoothing technique over existing methods.
Argonne Preprint ANL/MCS-P1971-1011, November 2011.
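For context, Nesterov's smoothing technique replaces a nonsmooth objective given in saddle-point form with a smooth surrogate whose gradient is Lipschitz continuous; a minimal sketch of the standard construction follows (the notation here is illustrative and not taken from this paper):

```latex
% Nonsmooth objective in saddle-point form over a bounded dual set $U$:
%   f(x) = \max_{u \in U} \bigl\{ \langle A x, u \rangle - \hat{\phi}(u) \bigr\}.
% Smoothed surrogate, with a strongly convex prox-function $d(u) \ge 0$
% and smoothing parameter $\mu > 0$:
f_{\mu}(x) = \max_{u \in U} \bigl\{ \langle A x, u \rangle - \hat{\phi}(u) - \mu\, d(u) \bigr\}.
% The gradient of $f_{\mu}$ is Lipschitz with constant $\|A\|^{2}/\mu$,
% so an accelerated gradient method attains an $\varepsilon$-solution
% in $O(1/\varepsilon)$ iterations instead of the $O(1/\varepsilon^{2})$
% rate of subgradient methods.
```

In the setting of this abstract, the primal feasible set is unbounded, so an artificial bound (governed by the parameter $\theta$) is imposed to make this construction applicable, and the iteration bound inherits a dependence on $\theta$.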