We present a new algorithm, Iterative Estimation Maximization (IEM), for stochastic linear programs with Conditional Value-at-Risk constraints. IEM constructs and solves a sequence of compact linear optimization problems to find the optimal solution. The size of the problem solved in each iteration is independent of the number of random samples, which makes IEM extremely efficient for real-world, large-scale problems. We prove that IEM converges to the true optimal solution, and give a lower bound on the number of samples required to probabilistically bound the solution error. Experiments show that IEM is an order of magnitude faster than the best known algorithm on large problem instances.