Tuesday, May 22, 2018

Interpolation and attention

An artificial general intelligence like Asa H would require a huge casebase in order to operate autonomously in the real world. A buffer might store a small fraction of those cases. Ideally the cases in this buffer, Ci, would all be as close as possible to the current input vector, V (as judged by the dot products of V with each Ci, for example). Any case Ci could easily be dropped from the buffer if the latest input vector has become too different from it. Replacing dropped cases with closer matches would be more difficult, requiring a (parallelized) search through the "full" casebase, but this could be done at least periodically. One could then interpolate to the current input vector, V, from the set of cases, Ci, currently in the buffer memory. This would produce a set of weights for the various cases Ci. These weights could then be applied to the predictions for the next time step in each Ci and a best single prediction calculated and output. Weighting the contribution of each case Ci by its utility measure would also be possible.
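
A minimal sketch of this buffering-plus-interpolation scheme, in Python, might look like the following. The case layout (a vector, a next-step prediction, and a utility) and all of the names here are my own assumptions for illustration, not Asa H's actual code; similarity is measured with normalized dot products, as suggested above.

import numpy as np

class CaseBuffer:
    def __init__(self, capacity, drop_threshold=0.5):
        self.capacity = capacity              # maximum number of buffered cases
        self.drop_threshold = drop_threshold  # minimum similarity needed to keep a case
        self.cases = []                       # each case: (vector, next_step_prediction, utility)

    def update(self, v, casebase):
        # Drop any buffered case that has become too different from the current
        # input vector v, then refill from the full casebase.
        def sim(c):
            return np.dot(v, c[0]) / (np.linalg.norm(v) * np.linalg.norm(c[0]))
        self.cases = [c for c in self.cases if sim(c) >= self.drop_threshold]
        kept = {id(c) for c in self.cases}
        # Stand-in for the (parallelized) search through the full casebase.
        for c in sorted(casebase, key=sim, reverse=True):
            if len(self.cases) >= self.capacity:
                break
            if id(c) not in kept:
                self.cases.append(c)

    def predict(self, v, weight_by_utility=False):
        # Interpolate: weight each buffered case's next-step prediction by its
        # similarity to v, optionally scaled by the case's utility.
        weights, predictions = [], []
        for vec, pred, util in self.cases:
            w = max(np.dot(v, vec) / (np.linalg.norm(v) * np.linalg.norm(vec)), 0.0)
            if weight_by_utility:
                w *= util
            weights.append(w)
            predictions.append(pred)
        if not weights or sum(weights) == 0.0:
            return None
        return np.average(np.array(predictions), axis=0, weights=weights)

In use, update() might be called only periodically (it is the expensive step) while predict() runs at every time step.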

Saturday, May 19, 2018

Minimum vocabulary

In his book Human Knowledge Russell defined a minimum vocabulary, one acquired by observation/experience of each thing named. Asa H's lowest level concepts have been defined in just this way; see my blog of 1 Oct. 2015.

Thursday, May 17, 2018

Asa’s Intentions

t1<t2<t3. At time t1 a case C is strongly activated from A.s.a. H's casebase. C contains a planned action to occur at time t3. At time t2 A.s.a. then has the intention of taking that action: it is in a state of mind directed toward taking the action.
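
As a toy illustration only (my own construction in Python, not A.s.a. H code), the pending planned action of a strongly activated case can be read off as an intention at any time between activation and the action's scheduled time:

ACTIVATION_THRESHOLD = 0.9          # assumed value

case = {"activation": 0.95,         # C strongly activated at t1
        "planned_action": "grasp",  # action contained in C
        "action_time": 3}           # scheduled for t3

def intentions(case, now):
    # Between activation (t1) and the action time (t3) the planned
    # action is held as a pending intention.
    if case["activation"] > ACTIVATION_THRESHOLD and now < case["action_time"]:
        return [case["planned_action"]]
    return []

print(intentions(case, now=2))      # at t2 -> ['grasp']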

Innate ideas/concepts

Consider an embodied agent with an array of thermistors spread over its body. A localized external heat source might activate sensors A and B and later B and C, C and D, etc. From this experience the agent might acquire the notion that A is “near” B but “far” from C and that B is “near” C but “far” from D, etc. This might constitute a primitive model of space. With a moving or time varying heat source sequential activation of the sensors in the array might produce a primitive model of time. Sensors deep within the body would be less sensitive to external stimuli than sensors on the surface (“skin”), producing a notion of “inside” and “outside.”
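
One simple way this could be made concrete (a sketch under my own assumptions, not a description of any existing implementation): count how often pairs of sensors are active together, and treat frequently co-active sensors as "near" one another.

from collections import defaultdict
from itertools import combinations

co_counts = defaultdict(int)   # how often each pair of sensors fires together

def observe(active_sensors):
    # Record one time step's set of simultaneously active sensors.
    for a, b in combinations(sorted(active_sensors), 2):
        co_counts[(a, b)] += 1

# A localized heat source sweeping slowly across the body:
observe({"A", "B"})
observe({"B", "C"})
observe({"C", "D"})

def nearness(s1, s2):
    return co_counts[tuple(sorted((s1, s2)))]

print(nearness("A", "B"), nearness("A", "C"))   # 1 0 -> A is "near" B but "far" from C

Counting which sensor tends to fire just after which, rather than simultaneously, would give the corresponding primitive "before/after" relation for time.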

Evolution of oppositional concepts

I have experimented with oppositional concepts in A.s.a. H. for some time.* One can, for example, evolve each vector concept, Ci, over time, compute -Ci, and maintain separate vector utilities for Ci and -Ci as they are used/become activated. -Ci then only evolves/changes as Ci changes. Alternatively, once defined, -Ci can be evolved independently of Ci, each with its own (evolving) vector utilities. But then Ci and -Ci may evolve "away from each other" and become less the opposites of one another. One could periodically reinforce opposition. How often? Under what conditions? Or we could maintain several different "opposites." Frequently A.s.a. has been allowed to completely delete concepts that are judged not to be useful. A rough sketch of the two alternatives is given below, after the footnotes.
* See my blog of 5 July 2016 and Trans. Kansas Acad. Sci., vol 109, #3/4, 2006.
  Simpler "light" versions of A.s.a. H. may not include oppositional concepts.
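
The two alternatives above might be sketched as follows (Python; the class and update rule are my own assumptions, not the actual A.s.a. H code). If no separate update is supplied, -Ci simply tracks Ci; otherwise -Ci evolves on its own and reinforce_opposition() can periodically pull it back toward the exact negation of Ci.

import numpy as np

class Concept:
    def __init__(self, vector):
        self.c = np.asarray(vector, dtype=float)   # concept Ci
        self.neg = -self.c.copy()                  # opposite -Ci
        self.utility_c = 0.0                       # separate vector utilities
        self.utility_neg = 0.0

    def evolve(self, delta_c, delta_neg=None):
        # Evolve Ci; evolve -Ci independently if a separate delta is given,
        # otherwise keep -Ci locked to the negation of Ci.
        self.c += delta_c
        if delta_neg is None:
            self.neg = -self.c.copy()
        else:
            self.neg += delta_neg

    def reinforce_opposition(self, rate=0.5):
        # Periodically pull -Ci back toward the exact negation of Ci.
        self.neg = (1.0 - rate) * self.neg + rate * (-self.c)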

Tuesday, May 15, 2018

Buridan's ass

Jean Buridan considered a starving ass placed between two haystacks that are equidistant from it. According to Buridan, unable to choose between them, the ass would starve to death. Quite early on in my experiments with Lego robots I had something rather similar to the Lego Scout Bug, with two feelers in front. If the left feeler was triggered the robot was to turn toward the right. If the right feeler was triggered the robot was to turn toward the left. In operation I once had the robot hit a wall exactly head-on, trigger both feelers at once, and sit permanently frozen in place. Needless to say, more sophisticated programming doesn't suffer from this problem.
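
For what it's worth, a toy reconstruction of that two-feeler rule, together with the simplest sort of fix (break the tie when both feelers fire at once), might look like this. This is my own sketch in Python, not the original Scout program:

import random

def naive_turn(left_hit, right_hit):
    if left_hit and not right_hit:
        return "turn right"
    if right_hit and not left_hit:
        return "turn left"
    return "freeze"            # head-on collision: both feelers fire, no rule applies

def tie_breaking_turn(left_hit, right_hit):
    if left_hit and right_hit:                 # head-on: pick a side at random
        return random.choice(["turn left", "turn right"])
    return naive_turn(left_hit, right_hit)

print(naive_turn(True, True))          # freeze
print(tie_breaking_turn(True, True))   # turn left or turn right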

Saturday, May 12, 2018

A science of values

Values begin with the advent of life: the valuing of offspring and of longevity. I have watched A.s.a. add intelligence to this list, along with the beginnings of an ethics. (See my blogs of 9 Jan 2015 and 20 March 2018.) Since they have different needs, different organisms will develop different values, and conflicts between species will result. On a smaller scale there is conflict between individual agents when their values differ.