Science consists of a cycle of theory and experiment; experiments are needed to "keep us honest." I have criticized Hutter's AIXI (Universal Artificial Intelligence, Springer, 2005) for offering no executables. (Joel Veness, Hutter, et al., have now provided pseudocode in "A Monte-Carlo AIXI Approximation," Dec. 2010, and C++ code on Veness' website, jveness.info/.)
But it is hard to get more than qualitative results from AI experiments. In my Asa H I have a choice of many clustering algorithms (or a combination of several of them), various similarity measures, a variety of feature extraction algorithms, and multiple extrapolation/learning algorithms. I can also adjust the amount of processing time devoted to learning overall and to each kind of learning, and there are a number of thresholds, learning rates, and other free parameters to set. In these respects I have found AI experimentation much harder than the pure physics experiments I did 20 years ago.
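To give a rough sense of why this parameter space is hard to explore, here is a minimal sketch in Python. The particular algorithm names, parameter values, and option counts below are my own illustrative assumptions, not the actual options in Asa H; the point is only that the number of distinct experimental configurations is the product of the option counts along each dimension, which grows quickly.

```python
from itertools import product

# Hypothetical configuration space, loosely modeled on the kinds of
# choices described in the text (all specific names/values are assumed):
choices = {
    "clustering":         ["k-means", "hierarchical", "leader"],
    "similarity":         ["euclidean", "cosine", "dot-product"],
    "feature_extraction": ["raw", "pca", "thresholded"],
    "extrapolation":      ["linear", "nearest-case"],
    "learning_rate":      [0.01, 0.1, 0.5],
    "vigilance":          [0.7, 0.8, 0.9],  # an assumed clustering threshold
}

# Every full experiment must fix one value per dimension, so the number
# of distinct configurations is the product of the option counts:
configs = list(product(*choices.values()))
print(len(configs))  # 3*3*3*2*3*3 = 486
```

Even this toy space needs 486 runs for one pass of an exhaustive comparison, before allocating processing time among the kinds of learning or repeating runs to average out noise, which is one concrete way the quantitative side of AI experimentation gets difficult.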