Other deep learners typically have a fixed architecture: a fixed number of layers, a fixed total number of nodes, and a fixed number of nodes in each given layer. The ImageNet-winning network from Geoffrey Hinton's group (AlexNet), for instance, had seven hidden layers and about 650,000 nodes in total, with a fixed number of nodes in each layer. Asa H, on the other hand, adds layers and cases/concepts as it learns, and the number of cases per layer varies over time as Asa learns.
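As a rough illustration of this difference, here is a minimal Python sketch of a hierarchy whose layer count and per-layer case count both grow during learning. The names (GrowingHierarchy, observe, add_layer) and the use of a cosine-similarity threshold to decide when a new case is stored are illustrative assumptions, not Asa H's actual code.

    # Illustrative sketch only; not Asa H's implementation.
    # A hierarchy that adds layers and cases/concepts as it learns.
    from dataclasses import dataclass, field
    from typing import List
    import math

    def cosine_similarity(a: List[float], b: List[float]) -> float:
        """Cosine similarity between two equal-length vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    @dataclass
    class Layer:
        cases: List[List[float]] = field(default_factory=list)  # grows over time

    class GrowingHierarchy:
        """Layer count and per-layer case count both change as learning proceeds."""

        def __init__(self, similarity_threshold: float = 0.9):
            self.layers: List[Layer] = [Layer()]      # start with one layer
            self.threshold = similarity_threshold     # assumed novelty criterion

        def observe(self, vector: List[float], layer_index: int = 0) -> None:
            """Store a new case unless a sufficiently similar one already exists."""
            layer = self.layers[layer_index]
            for case in layer.cases:
                if cosine_similarity(case, vector) >= self.threshold:
                    return                            # matches an existing case/concept
            layer.cases.append(vector)                # number of cases per layer varies

        def add_layer(self) -> None:
            """Add a new (initially empty) layer on top of the hierarchy."""
            self.layers.append(Layer())

For example, after observe([1.0, 0.0]) and observe([0.0, 1.0]) the first layer holds two cases, and add_layer() opens an empty layer above it ready to accumulate higher-level concepts; a fixed-architecture network cannot grow in either of these ways.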
It is an advantage of Asa H over humans that it can add memory and processors as and when it needs them.