Tuesday, May 22, 2018
Interpolation and attention
An artificial general intelligence like Asa H would require a huge casebase in order to operate autonomously in the real world. A buffer might hold a small fraction of those cases. Ideally the cases in this buffer, Ci, would all be as close as possible to the current input vector, V (as judged, for example, by the dot products of V with the Ci). Any case Ci could easily be dropped from the buffer once the latest input vector has drifted too far from it. Replacing dropped cases is harder, but a (parallelized) search through the full casebase could supply closer matches, at least periodically. One could then interpolate to the current input vector, V, from the set of cases Ci currently in buffer memory. This interpolation yields a set of weights, one per case Ci. Applying those weights to each case's prediction for the next time step gives a single best prediction to output. The contribution of each case Ci could also be weighted by its utility measure.
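A minimal sketch of such a buffer, assuming a "case" is simply a triple of (state vector, next-step prediction, utility). The class and parameter names here are hypothetical illustrations, not taken from Asa H itself; similarity is the dot product of normalized vectors, and interpolation weights are the normalized similarities.

```python
import numpy as np

class CaseBuffer:
    """Hypothetical attention buffer over a large casebase (illustrative only)."""

    def __init__(self, casebase, buffer_size=4, drop_threshold=0.5):
        self.casebase = casebase          # list of (state, prediction, utility)
        self.size = buffer_size
        self.drop_threshold = drop_threshold
        self.buffer = []

    @staticmethod
    def similarity(v, c):
        # dot product of the normalized vectors (cosine similarity)
        return float(np.dot(v, c) / (np.linalg.norm(v) * np.linalg.norm(c)))

    def refresh(self, v):
        # drop cases whose match to the current input has become too poor
        self.buffer = [c for c in self.buffer
                       if self.similarity(v, c[0]) >= self.drop_threshold]
        # the (parallelizable) search of the full casebase for closer matches,
        # shown here as a simple ranked scan
        ranked = sorted(self.casebase,
                        key=lambda c: self.similarity(v, c[0]), reverse=True)
        for c in ranked:
            if len(self.buffer) >= self.size:
                break
            if not any(c is b for b in self.buffer):
                self.buffer.append(c)

    def predict(self, v, use_utility=False):
        # interpolation weights: normalized similarities to v, optionally
        # scaled by each case's utility measure
        self.refresh(v)
        w = np.array([self.similarity(v, c[0]) for c in self.buffer])
        if use_utility:
            w = w * np.array([c[2] for c in self.buffer])
        w = w / w.sum()
        preds = np.array([c[1] for c in self.buffer])
        return w @ preds   # single blended next-step prediction
```

In this sketch the refresh step is where most of the cost lives; in a real system the ranked scan over the full casebase would run in parallel and only periodically, as described above.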