Typical Asa H light software (see my blogs of 10 Feb. 2011 and 14 May 2012) allows for simple adjustments to learning by setting parameters like L and skip. More complex software packages allow Asa to observe the amount of time it spends taking input, giving output, searching the case base, performing feature extraction, adding to memory, sorting memory, comparing, extrapolating, doing deduction, doing simulation, updating cases, etc., and then to correlate these efforts with the utility (rewards) observed/received over time (see chapter one of my book Twelve Papers, the section titled "self monitoring," www.robert-w-jones.com). Parameters like L and skip are themselves made inputs to the hierarchical memory, and Asa learns a vector/concept like:
thought = (search, deduction, simulation, sorting, extrapolating, comparing, remembering, etc.).
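The monitoring idea can be sketched in a few lines of Python. The names here (Monitor, ACTIVITIES, thought_vector) and the use of a Pearson correlation are illustrative assumptions on my part, not the actual Asa H code:

```python
import time
from statistics import correlation  # Pearson r, Python 3.10+

ACTIVITIES = ["search", "deduction", "simulation", "sorting",
              "extrapolating", "comparing", "remembering"]

class Monitor:
    """Track time spent on each cognitive activity and relate it to utility."""
    def __init__(self):
        self.effort = {a: [] for a in ACTIVITIES}  # seconds spent, per episode
        self.utility = []                          # reward received, per episode

    def start_episode(self):
        for a in ACTIVITIES:
            self.effort[a].append(0.0)

    def timed(self, activity, fn, *args):
        # run one cognitive operation and log how long it took
        start = time.perf_counter()
        result = fn(*args)
        self.effort[activity][-1] += time.perf_counter() - start
        return result

    def end_episode(self, reward):
        self.utility.append(reward)

    def thought_vector(self):
        # correlate the effort spent on each activity with the utility
        # received, giving one component per activity of the "thought" concept
        return {a: correlation(self.effort[a], self.utility)
                for a in ACTIVITIES
                if len(set(self.effort[a])) > 1 and len(set(self.utility)) > 1}
```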
Asa can be allowed to adjust its own learning by making these parameters outputs of the memory hierarchy (a sketch of this loop follows below). Thinking can then come to constitute a part of Asa's concept of its self:
self = (sense, act, health, thought).
This is a further evolution of Asa's self concept. Asa can observe some of its own thought processes.
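One way this loop might look in code is sketched below. The sign-following update rule is my assumption for illustration; in Asa H itself the adjustment is made by the memory hierarchy:

```python
from statistics import correlation  # Pearson r, Python 3.10+

def adjust(params, history, step=0.1):
    """params:  current learning parameters, e.g. {"L": 4.0, "skip": 2.0}
    history: one record per episode, e.g. {"L": 4.0, "skip": 2.0, "utility": 0.7}
    Nudge each parameter in the direction that has correlated with reward."""
    rewards = [h["utility"] for h in history]
    for name in params:
        values = [h[name] for h in history]
        if len(set(values)) > 1 and len(set(rewards)) > 1:
            # correlation requires variation in both series
            r = correlation(values, rewards)
            params[name] += step if r > 0 else -step
    return params
```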
In its interactions with the world I have tried to give Asa the same sorts of sensations and behaviors that a human might experience. If Wittgenstein is right, this may be necessary if humans and AIs are to understand each other. But what Asa sees of its own thought processes is quite different from what humans know of their own inner thoughts. Will this prove to be a problem? Might the same be true if we met space aliens?