Monday, July 28, 2014
Tensors in artificial intelligence
The nervous system acts as a state machine, taking an input vector and a state vector (memory) at time t1 and generating an output vector and a state vector at time t2. Since the input and state vectors at t1 are typically not parallel to the output and state vectors at t2, one is led to consider tensors in order to compute the output and state at t2 from the input and state at t1. There has been a limited amount of work along these lines, for example the papers by Pellionisz and Llinas (e.g., Neuroscience, vol. 16, pg 245, 1985) and the PhD thesis of C. P. Dolan, UCLA, 1989.
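As a minimal illustration (my own sketch, not the formalism of the papers cited above), the simplest tensor version of such a state machine uses four matrices, here given the assumed names A, B, C, and D, to map the input and state at t1 into the output and state at t2:

import numpy as np

# Minimal sketch of a linear state machine.  The matrices A, B, C, D
# stand in for learned tensor components (hypothetical names).
rng = np.random.default_rng(0)
n_in, n_state, n_out = 3, 4, 2

A = rng.normal(size=(n_state, n_state))  # state at t1 -> state at t2
B = rng.normal(size=(n_state, n_in))     # input at t1 -> state at t2
C = rng.normal(size=(n_out, n_state))    # state at t1 -> output at t2
D = rng.normal(size=(n_out, n_in))       # input at t1 -> output at t2

def step(x_t1, s_t1):
    """Map the input and state vectors at t1 to output and state at t2."""
    s_t2 = A @ s_t1 + B @ x_t1
    y_t2 = C @ s_t1 + D @ x_t1
    return y_t2, s_t2

state = np.zeros(n_state)
for t in range(5):
    x = rng.normal(size=n_in)            # stand-in for a sensory input vector
    y, state = step(x, state)

A real nervous system is of course nonlinear, but even this linear case shows why non-parallel input and output vectors force matrix (tensor) rather than scalar couplings.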
Asa H upper ontologies
The upper 5 or so layers of the Asa H hierarchy (see my blog of 4 May 2013) typically include 50 or more concepts in common with the Generalized Upper Model 2.0 (J. A. Bateman, R. Henschel, and F. Rinaldi, 1995), including:
UM-thing
UM-relation
configuration
attributes
happening
circumstances
sensing
ordering
positioning
spatial order
naming
facing
motion
behind
need
property
element
quality
color
process
location
There is a weaker relationship with the CYC upper ontology (D. Lenat et al.), including concepts like:
thing
individual
object
situation
event
attribute
Friday, July 18, 2014
Curriculum for Asa H 2.0
What subjects should a machine learner be taught before releasing it into the wild? And in what order should they be taught? My current best estimate is something like the following (a small code sketch follows the list):
1. features
2. shapes
3. concrete objects
4. actions
5. alphabet and numerals
6. words and naming
7. counting
8. language/reading
9. abstract objects
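A minimal sketch of how such a curriculum might be represented and presented to a learner; the teach() function, the lessons_for() source of training cases, and the agent.observe() interface are hypothetical placeholders, not part of Asa H:

# The curriculum as an ordered configuration.
CURRICULUM = ["features", "shapes", "concrete objects", "actions",
              "alphabet and numerals", "words and naming", "counting",
              "language/reading", "abstract objects"]

def teach(agent, lessons_for):
    """Present each subject's training cases to the agent in order.
    lessons_for(subject) and agent.observe(case) are assumed interfaces."""
    for subject in CURRICULUM:
        for case in lessons_for(subject):
            agent.observe(case)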
Thursday, July 17, 2014
Why is there something rather than nothing?
It may be that a vacuum is unstable, much like the expansion of a de Sitter space in general relativity and the creation of particle-antiparticle pairs out of the vacuum in quantum mechanics. (But one should not expect that current physics tells the whole story or the whole truth.)
Issues with sensor upgrades and Asa
In nature, the brain and intelligence coevolve with the senses and effectors. In humans the visual cortex is a substantial part of the brain. Asa H has been connected to simple LEGO NXT sensors as well as simple visual inputs (see earlier blogs like 12 March 2013, 16 Feb. 2013, 14 Feb. 2013, 13 June 2013). Concepts/semantics grounded in terms of these simple sensory signaling devices may be lost or distorted if/as we try to upgrade to richer sensory systems.
In humans, some limited reorganization occurs in the brain when sensory input changes (say, after loss of an eye or a hand or, conversely, if a child is given reading glasses). In Asa H some relearning also occurs. But if large-scale improvements are made in, say, Asa's vision system, will the previously learned mental concepts still be useful? Or should/must we start learning from scratch with the new sensors in place? Meaning can be very sensitive to the data stream that has been seen (see, for example, pages 381-382 of Kelly's book The Logic of Reliable Inquiry, Oxford, 1996).
Thursday, July 10, 2014
Rationality
Perfect rationality is impossible (see, for example, Predictably Rational, R. B. McKenzie, Springer, 2010). My work with Asa H is aimed at producing a mind which is more rational than humans are.
Looking for change
We have experimented with a version of Asa H in which we do not advance the time step and record input components until an input "changes significantly" (R. Jones, Trans. Kansas Academy of Sci., vol. 117, pg 126, 2014). This can be done by storing and updating a running average of the input (a single component of the input vector or the input similarity measure, a dot product for example) and a running average of its standard deviation (of that single component or of the similarity measure).
An average over time is involved so we can employ multiple copies of this algorithm, each looking over time windows (intervals) of different length.
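A minimal sketch of one way to implement this, using exponentially weighted running averages; the smoothing constants, the window lengths, and the 3-sigma threshold are illustrative choices of mine, not values from the paper cited above:

import math

class ChangeDetector:
    """Running average and deviation of a scalar signal (a single input
    component or a similarity measure such as a dot product)."""

    def __init__(self, alpha, threshold=3.0):
        self.alpha = alpha        # smoothing constant, roughly 1/window length
        self.threshold = threshold
        self.mean = 0.0
        self.var = 1e-6           # small nonzero start avoids division by zero

    def update(self, x):
        """Return True when x deviates significantly from its running average."""
        dev = x - self.mean
        significant = abs(dev) > self.threshold * math.sqrt(self.var)
        self.mean += self.alpha * dev
        self.var += self.alpha * (dev * dev - self.var)
        return significant

# Multiple copies of the detector watch windows of different lengths.
detectors = [ChangeDetector(alpha=1.0 / w) for w in (10, 100, 1000)]
signal = [0.0] * 50 + [5.0] * 50          # toy step change in the input
for x in signal:
    fired = [d.update(x) for d in detectors]   # update every window
    if any(fired):
        pass   # advance the time step and record the input here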
Multiple similarity measures
We advocate scientific pluralism for modeling reality (R. Jones, Trans. Kansas Academy of Sci., vol. 116, pg 78, 2013). Similarly, in Asa H we can simultaneously employ multiple similarity measures (either in a single agent or spread through a society of agents), each tracking its own best match in the (single or multiple) case base(s) employed and generating its own preferred action sequence.
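A minimal sketch of the idea, with the dot product and cosine similarity as two example measures; the toy case base and vectors are stand-ins of mine, not Asa H internals:

import numpy as np

def dot_similarity(a, b):
    return float(np.dot(a, b))

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

measures = {"dot": dot_similarity, "cosine": cosine_similarity}

# Toy case base; in Asa H each case would also carry an action sequence.
case_base = [np.array([1.0, 0.0]), np.array([0.6, 0.8]), np.array([0.0, 1.0])]
query = np.array([0.9, 0.4])

# Each measure tracks its own best match in the case base, so each can
# propose its own preferred action sequence.
best = {name: max(case_base, key=lambda case: f(query, case))
        for name, f in measures.items()}

The measures could just as well be spread across a society of agents, one measure per agent, with each agent reporting its own best match.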