From studying neural networks (or logic circuits) we know that instead of training a single network with m inputs and n outputs, one can train n separate networks, each with the same m inputs but only 1 output. So, if we have a programming problem that requires n outputs and we don't know how to solve it, we could start by trying to solve the coding task for just 1 of the required outputs. If successful, we could then try to work on the other outputs. (Perhaps keeping them as parallel processes?!) We could also eventually try to connect the separate solutions if needed ("term sharing").
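As a concrete illustration, here is a minimal Python sketch of that decomposition, assuming a toy linear problem (the data, dimensions, and variable names are all hypothetical, not anything from Asa H): each "network" is just a single-output least-squares fit, and the n fits are independent, so they could indeed run in parallel.

import numpy as np

rng = np.random.default_rng(0)
m, n, samples = 4, 3, 100              # m inputs, n outputs (toy sizes)
X = rng.normal(size=(samples, m))
true_W = rng.normal(size=(m, n))
Y = X @ true_W                         # synthetic targets with n outputs

# Train n separate single-output models, one per target column.
# Each fit sees all m inputs but predicts only 1 output.
models = [np.linalg.lstsq(X, Y[:, j], rcond=None)[0] for j in range(n)]

# Recombine the separate solutions column by column.
Y_hat = np.column_stack([X @ w for w in models])
print(np.allclose(Y_hat, Y))           # True for this linear toy problem

Connecting the separate solutions afterward ("term sharing") would then amount to noticing that the single-output solutions reuse common intermediate terms and factoring those out, as in multi-output logic synthesis.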
With the kind of vector representations I use in Asa H, generalization can be accomplished by deleting less important (smaller?) vector components and lowering the dot-product similarity threshold required for categorization. Specialization can be accomplished by adding vector components (during learning) and raising the similarity threshold needed for categorization.
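A minimal sketch of this, assuming a category is stored as a prototype vector plus a dot-product threshold (the function names, the 10% threshold adjustments, and the choice to model deletion as zeroing a component are my illustrative assumptions, not Asa H's actual mechanism):

import numpy as np

def similarity(a, b):
    """Normalized dot product (cosine) similarity between two vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def generalize(prototype, threshold, drop=1):
    """Delete the `drop` smallest-magnitude components (zero them out)
    and lower the similarity threshold, so more inputs match."""
    p = prototype.copy()
    p[np.argsort(np.abs(p))[:drop]] = 0.0
    return p, threshold * 0.9            # hypothetical 10% relaxation

def specialize(prototype, threshold, new_components):
    """Add newly learned components and raise the similarity threshold,
    so fewer inputs match. (Inputs must grow to the same length.)"""
    p = np.concatenate([prototype, new_components])
    return p, min(1.0, threshold * 1.1)  # hypothetical 10% tightening

proto, thresh = np.array([0.9, 0.1, 0.8, 0.05]), 0.8
x = np.array([0.7, 0.3, 0.9, 0.2])
print(similarity(x, proto) >= thresh)    # categorization test
gen_proto, gen_thresh = generalize(proto, thresh)
print(similarity(x, gen_proto) >= gen_thresh)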
(A category can be defined by specifying the range over which each vector component (attribute) may vary; it need not be defined by a dot-product threshold alone. That is, one can use other similarity measures as well.)
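For instance, a range-based membership test could look like the following sketch (the attribute ranges are made up; a Euclidean-distance threshold or any other measure would serve equally well):

import numpy as np

def in_category(x, lows, highs):
    """True if every component (attribute) of x lies in its allowed range."""
    return bool(np.all((x >= lows) & (x <= highs)))

lows  = np.array([0.0, 0.2, -1.0])
highs = np.array([1.0, 0.6,  1.0])
print(in_category(np.array([0.5, 0.4, 0.0]), lows, highs))  # True
print(in_category(np.array([0.5, 0.9, 0.0]), lows, highs))  # False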