We present a method for tuning software parameters that draws on ideas from software testing and machine learning. The method is based on the key observation that, for many classes of instances, the software shows improved performance if a few critical parameters are given ``good'' values, although which parameters are critical depends on the class of instances. Our method attempts to find good parameter values using a relatively small number of optimization trials. We present tests of our method on three MILP solvers: CPLEX, CBC, and GLPK. In these tests, our method always finds parameter values that outperform the defaults, in many cases by a significant margin; the improvement in total run time over default performance generally ranged from 31% to 88%. On similar instances that were not used for training, we observe comparable improvements in run time. Our implementation of the method, the Selection Tool for Optimization Parameters (STOP), is available under a free and open-source license.
Technical Report 2007-7, Department of Industrial Engineering, University of Pittsburgh, Pittsburgh, PA