Sunday, October 27, 2013

Reinventing the wheel in every generation?

I just got back from the local AAPT conference (American Association of Physics Teachers).  Again there was some discussion of the modeling method of physics instruction (as promoted by Arizona State Univ.), though perhaps less than in past years.  While I am certainly an advocate of teaching the scientific method, I don't think one can expect students to rediscover a substantial fraction of the physical models that we use.  How can the work of decades and centuries be performed in a semester?  Why would one want to?  Doesn't human society pass on its discoveries?  Can't we learn from others?

Classical mechanics was developed by Newton.  Newton was a genius.  My students are not.  The average person must be able to learn from those who are smarter, learning things that they could never discover themselves.

Friday, October 25, 2013

Neural networks with multiple training algorithms

I have used a variety of neural network training algorithms: backpropagation, genetic algorithms, particle swarm methods, etc.  Different algorithms have different advantages and disadvantages.  Would it make sense to switch back and forth between two or more algorithms as training proceeds?  I have often found genetic algorithms to be slower than backprop.  Might one start training with backprop to speed things up and then switch to a genetic algorithm later on to escape from any local minima?  A rough sketch of the idea follows.
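Here is a rough Python sketch of what I have in mind (an illustration only, not code from any of my programs, and the toy problem, network size, and hyperparameters are all assumptions): plain gradient descent first for speed, then a very simple evolutionary search seeded with the backprop solution as a stand-in for a fuller genetic algorithm.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: approximate y = sin(x) on [-pi, pi] with a 1-8-1 network.
    X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
    Y = np.sin(X)

    def init_weights():
        return {"W1": rng.normal(0, 0.5, (1, 8)), "b1": np.zeros(8),
                "W2": rng.normal(0, 0.5, (8, 1)), "b2": np.zeros(1)}

    def forward(w, x):
        h = np.tanh(x @ w["W1"] + w["b1"])
        return h, h @ w["W2"] + w["b2"]

    def loss(w):
        _, out = forward(w, X)
        return float(np.mean((out - Y) ** 2))

    # Phase 1: backprop (batch gradient descent) for fast initial progress.
    w = init_weights()
    lr = 0.05
    for _ in range(2000):
        h, out = forward(w, X)
        d_out = 2 * (out - Y) / len(X)            # dL/d(out)
        d_h = (d_out @ w["W2"].T) * (1 - h ** 2)  # back through tanh
        w["W2"] -= lr * h.T @ d_out
        w["b2"] -= lr * d_out.sum(axis=0)
        w["W1"] -= lr * X.T @ d_h
        w["b1"] -= lr * d_h.sum(axis=0)

    print("loss after backprop:", loss(w))

    # Phase 2: a simple (1 + lambda) evolutionary search seeded with the
    # backprop weights, mutating them to move beyond the local basin.
    # A real genetic algorithm would add a population and crossover.
    def mutate(parent, scale):
        return {k: v + rng.normal(0, scale, v.shape) for k, v in parent.items()}

    best, best_loss = w, loss(w)
    for gen in range(200):
        for child in (mutate(best, scale=0.05) for _ in range(20)):
            l = loss(child)
            if l < best_loss:
                best, best_loss = child, l

    print("loss after evolutionary refinement:", best_loss)

One could of course keep alternating between the two phases, or switch based on how the training error is behaving.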

Monday, October 21, 2013

Paper AND electronic

The ideal library would have all holdings in both paper and electronic form.  Paper because of its advantages ("Why the Brain Prefers Paper," Ferris Jabr, Scientific American, Nov. 2013, p. 49) and electronic to allow for things like computerized searching and hypertext linking.

Tuesday, October 8, 2013

TinMan commercial software

The TinMan AI Builder software (TinMan Systems) helps you assemble and train modular and hierarchical neural networks.  I am not sure under what conditions this is easier or better than building a single, simple three-layer (or deeper) neural network with software like Brainmaker (California Scientific Software) or NeuroSolutions (NeuroDimension), for example.  It would be useful to compare these tools side by side using each vendor's own example projects.  A toy illustration of the modular-versus-flat contrast appears below.
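To make the contrast concrete, here is a toy Python sketch (my own illustration, not the actual design of TinMan, Brainmaker, or NeuroSolutions; the data, module split, and network sizes are assumptions): a single flat network versus a modular arrangement in which one sub-network's output feeds a second.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(2000, 4))
    intermediate = np.tanh(X[:, 0] * X[:, 1])       # hidden structure in the data
    y = intermediate + 0.5 * X[:, 2] - X[:, 3] ** 2

    # Option A: one flat three-layer network mapping all inputs to the output.
    flat = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    flat.fit(X[:1500], y[:1500])

    # Option B: modular/hierarchical -- module 1 learns the intermediate quantity
    # from two of the inputs, module 2 combines that estimate with the rest.
    mod1 = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    mod1.fit(X[:1500, :2], intermediate[:1500])
    feat = np.column_stack([mod1.predict(X[:, :2]), X[:, 2:]])
    mod2 = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    mod2.fit(feat[:1500], y[:1500])

    print("flat R^2:   ", flat.score(X[1500:], y[1500:]))
    print("modular R^2:", mod2.score(feat[1500:], y[1500:]))

The modular version only pays off when you know (or can learn) a sensible decomposition of the problem; otherwise the flat network is simpler to build and train.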

Monday, October 7, 2013

My connectionist AI

Researchers who come from a traditional AI background (symbolic AI, GOFAI) tend to avoid connectionist algorithms and to view connectionism as competition.  Many AI textbooks spend only a small number of pages on neural networks, and typical AI conferences may contain only a few neural network talks and papers.  Coming from a physics and numerical computing background, I had no such bias.  Early on, when I needed a nonlinear multivariable function approximation algorithm for my AI (Asa F 1.0), I was quite happy to try artificial neural network algorithms.  My physics and math background also made me comfortable with continuous mathematics rather than discrete math (though I soon began to use both, even in one and the same program).  I see connectionism as a useful source of algorithms.
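For readers unfamiliar with this use of neural networks, here is a minimal sketch (illustrative only, not the Asa F code; the target function and settings are assumptions) of a network serving purely as a nonlinear multivariable function approximator, using scikit-learn's MLPRegressor.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(2000, 3))            # three input variables
    y = np.sin(X[:, 0]) * X[:, 1] + X[:, 2] ** 2       # nonlinear target function

    # Fit a two-hidden-layer network to samples of the target function.
    net = MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                       max_iter=2000, random_state=0)
    net.fit(X[:1500], y[:1500])

    # Evaluate generalization on held-out points.
    print("test R^2:", net.score(X[1500:], y[1500:]))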

Friday, October 4, 2013

Fusion history

I am reading Search for the Ultimate Energy Source by S. O. Dean (Springer, 2013).  Dean tells how the bureaucrats took control of the direction of the U.S. fusion energy program, and he also tells of the program's decline.  I would suggest there is a causal relationship between the two.  Europe is now in the lead in magnetic fusion energy research with ITER, and the U.S. effort in inertial confinement fusion has just seen its NIF fail to reach ignition.  I, personally, was unable to get funding for my RSX-IV device (Reactor Studies Experiment) and switched over to artificial intelligence research.  On page 24 Dean, who was head of (plasma) confinement systems for DOE, admits that "In reviewing many proposals over the years, I have observed that it is almost impossible to get a positive review of a proposal to pursue any idea that is not already being worked on in the government's own fusion program."  That is clearly a formula for failure: no new ideas are allowed, and this in what is intended to be a research organization.