I have used a variety of neural network training algorithms: backpropagation, genetic algorithms, particle swarm optimization, and so on. Each has its own advantages and disadvantages. Would it make sense to switch back and forth between two or more algorithms as training proceeds? In my experience, genetic algorithms tend to be slower than backprop. Could one start training with backprop to make fast initial progress, and then switch to a genetic algorithm later on to escape any local minima?
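To make the question concrete, here is a minimal sketch of the kind of two-phase schedule I have in mind (all names and hyperparameters here are just illustrative): a tiny 2-4-1 network on XOR is first trained with plain gradient-descent backprop, and the resulting weight vector then seeds the initial population of a simple mutation-only genetic algorithm with elitism, so the GA can only improve on the backprop solution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def unpack(w):
    # 2-4-1 network packed into one flat vector of 17 weights
    W1 = w[:8].reshape(2, 4); b1 = w[8:12]
    W2 = w[12:16].reshape(4, 1); b2 = w[16:17]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))  # sigmoid output

def loss(w):
    return float(np.mean((forward(w, X) - y) ** 2))

def backprop_step(w, lr=0.5):
    # Manual gradients for the 2-4-1 MSE network above
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))
    dp = 2 * (p - y) / len(X) * p * (1 - p)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    g = np.concatenate([dW1.ravel(), db1, dW2.ravel(), db2])
    return w - lr * g

# Phase 1: backprop for fast initial progress
w = rng.normal(0, 0.5, 17)
for _ in range(2000):
    w = backprop_step(w)

# Phase 2: GA seeded from the backprop weights, to keep exploring
pop = w + rng.normal(0, 0.1, (30, 17))
pop[0] = w  # elitism: keep the backprop solution in the population
for _ in range(200):
    fitness = np.array([loss(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:10]]      # truncation selection
    children = parents[rng.integers(0, 10, 20)]  # clone parents...
    children = children + rng.normal(0, 0.05, (20, 17))  # ...and mutate
    pop = np.vstack([parents, children])

best = min(pop, key=loss)
```

Because the parents (including the current best individual) survive each generation unchanged, the GA phase can never end up worse than where backprop left off; the open question is whether the mutations buy enough extra exploration to justify the extra fitness evaluations.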