Tuesday, March 12, 2013
1. We want to perceptually ground the meaning of linguistic concepts. Sensors provide this directly for some words:
hear (sound), see, feel (touch), yellow, green, blue, red, black, acceleration, angular rotation/deflection, temperature, feed (recharge), light level, time/date, hunger, force, north, south, range, and wind speed and direction.
For example, if an observer inputs the word "feel" when a touch sensor or force gauge is stimulated, then the word's meaning is learned as an association (a case) by Asa H. I have given Asa H all of these concepts using NXT sensors.
The words top, bottom, left, right, front, and back can be defined by sensors placed in those locations.
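Here is a minimal sketch of how such a word-sensor association might be stored (this is not the actual Asa H code; the sensor names and the 0.5 activity threshold are my illustrative assumptions):

    # Minimal sketch: associate an observer's word with whatever
    # sensor readings are active when the word is heard.
    cases = []  # each case pairs a word with a sensor snapshot

    def observe(word, sensors):
        """sensors: dict of sensor name -> current reading."""
        active = {name: v for name, v in sensors.items() if abs(v) > 0.5}
        if active:
            cases.append((word, active))

    # e.g. the observer types "feel" while the touch sensor is pressed
    observe("feel", {"touch": 1.0, "light": 0.1, "temperature": 0.2})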
2. With pattern recognition Asa H can be taught to recognize letters, numerals, and common objects like:
roads, heads, hair, mouth, people, chest, hands, male, fish, house, wheel, chair, go/moving, feet, faces, eyes, nose, body, arms, leg, female, common plants, bird, table, some common sounds, and hill.
Preprocessors are likely to be useful/necessary (just as face recognition may be innate in humans). I've built neural network recognition modules for both letters and numerals.
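As a rough sketch of what such a preprocessor could look like (the real modules are neural networks; this toy nearest-template matcher and its bitmaps are just illustrative):

    # Sketch of a recognition preprocessor: nearest-template matching
    # over stored letter prototypes (the real modules are neural nets).
    import math

    templates = {}  # letter -> prototype pixel vector

    def train(letter, pixels):
        templates[letter] = pixels

    def recognize(pixels):
        """Return the stored letter whose prototype is closest."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(templates, key=lambda k: dist(templates[k], pixels))

    train("T", [1, 1, 1, 0, 1, 0, 0, 1, 0])  # 3x3 toy bitmap
    train("L", [1, 0, 0, 1, 0, 0, 1, 1, 1])
    print(recognize([1, 1, 1, 0, 1, 0, 0, 1, 0]))  # -> "T"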
3. Some meanings are learned at the next hierarchical level (or higher):
temperature < threshold ----- cold
temperature > threshold ----- hot
light level < threshold ----- dark
range < threshold ----- near
range > threshold ----- far
yellow ----- color
green ----- color
blue ----- color
red ----- color
left ----- side
right ----- side
front ----- side
back ----- side
push and displacement > threshold ----- soft/flexible
push and displacement < threshold ----- hard
grasp and release and force zeros ----- drop
grasp then lift then move ----- take
near then later far ----- retreat
far then later near ----- approach
collision and sensor pegged ----- damage (we could give Asa a pain signal from this)
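A minimal sketch of how such second-level meanings could be derived from the first-level sensor values (the threshold values and token names here are my assumptions, not Asa H's actual code):

    # Sketch: derive second-level tokens from first-level sensor values.
    # Threshold values are illustrative assumptions.
    def second_level(sensors):
        tokens = []
        if sensors["temperature"] < 10.0:
            tokens.append("cold")
        elif sensors["temperature"] > 30.0:
            tokens.append("hot")
        if sensors["light"] < 0.2:
            tokens.append("dark")
        if sensors["range"] < 0.5:
            tokens.append("near")
        elif sensors["range"] > 5.0:
            tokens.append("far")
        return tokens

    print(second_level({"temperature": 5.0, "light": 0.1, "range": 7.0}))
    # -> ['cold', 'dark', 'far']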
4. Synonyms are learned when an observer inputs another word under similar conditions:
force ----- push
force ----- touch
collision ----- hit
near ----- close
far ----- distant
top ----- up
top ----- high
bottom ----- down
bottom ----- low
feed ----- energy
feed ----- good
stop ----- rest
grasp ----- hold
hungry ----- need
retreat ----- leave
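A sketch of how a synonym might be recorded when a new word arrives under conditions matching an already-labeled case (the data structure and crude similarity measure are assumptions for illustration):

    # Sketch: when a new word arrives under conditions that match an
    # already-labeled case, record it as a synonym of that label.
    synonyms = {}  # concept word -> set of synonym words

    def similarity(a, b):
        """Crude overlap measure over active sensor names (an assumption)."""
        shared = set(a) & set(b)
        return len(shared) / max(len(set(a) | set(b)), 1)

    def hear_word(word, conditions, cases, threshold=0.9):
        for label, stored in cases:
            if similarity(conditions, stored) > threshold:
                synonyms.setdefault(label, set()).add(word)

    cases = [("push", {"force": 1.0})]
    hear_word("touch", {"force": 0.9}, cases)  # "touch" arrives while force is felt
    print(synonyms)  # -> {'push': {'touch'}}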
These concepts constitute Asa's initial ontology.
Wednesday, March 6, 2013
Emergence in Asa H
Some simple concepts can be learned directly from sensory primitives. A Lego NXT robot running Asa H software can:
sense an object inside its gripper's jaws at time step 1
close the gripper at time step 2
feel forces on the gripper jaws at time step 3
In this way Asa H learns the "grasp" concept. If an observer inputs the word "grasp" at the same time then Asa H associates this name with the concept it learns.
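As an illustrative sketch (an assumed representation; Asa H's real cases are vectors in a hierarchical memory), the grasp episode can be stored as a time-ordered sequence of feature vectors together with the observer's label:

    # Sketch: a case is a labeled sequence of per-time-step feature vectors.
    grasp_case = {
        "label": "grasp",
        "steps": [
            {"object_in_jaws": 1.0},                       # t = 1
            {"object_in_jaws": 1.0, "close_gripper": 1.0}, # t = 2
            {"object_in_jaws": 1.0, "jaw_force": 0.8},     # t = 3
        ],
    }
    case_memory = [grasp_case]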
As another example of a low level concept Asa H can learn:
with the robot moving forward at time step 1
sensing an object far ahead at time step 1
with the robot moving forward at time step 2
sensing an object near ahead at time step 2
sense a force of frontal impact at time step 3
In this way Asa H learns the "collision" concept. If an observer inputs the word "collision" at the same time then Asa H associates this name with the concept it learns.
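Once such cases are stored, a new episode can be recognized by its degree of match against memory. A minimal sketch, assuming a normalized dot product as the similarity measure (the actual Asa H measure may differ):

    # Sketch: score a new episode against a stored case, step by step,
    # using a normalized dot product over each step's feature vector.
    import math

    def step_match(a, b):
        keys = set(a) | set(b)
        dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def episode_match(episode, case):
        scores = [step_match(e, s) for e, s in zip(episode, case["steps"])]
        return sum(scores) / len(scores) if scores else 0.0

    # A perfect replay of the stored grasp episode scores ~1.0:
    print(episode_match(grasp_case["steps"], grasp_case))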
At the next level up in the Asa H hierarchical case memory, Asa H learns:
sensing a collision at time step 1*
(some) sensor input sticking high (failing) at time step 2*
(some) sensor input sticking high at time step 3*
(some) sensor input sticking high at time step 4*
etc.
In this way Asa H learns the higher-level concept "damage." Again, if an observer sees the sensor fall off and inputs the word "damage", then Asa H associates this name with the concept it learns.
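A sketch of the corresponding second-level case, whose components are now the lower-level concepts rather than raw sensor channels (again an assumed representation):

    # Sketch: a level-2 case built from level-1 concept activations.
    damage_case = {
        "label": "damage",
        "steps": [
            {"collision": 1.0},          # t = 1*
            {"sensor_stuck_high": 1.0},  # t = 2*
            {"sensor_stuck_high": 1.0},  # t = 3*
            {"sensor_stuck_high": 1.0},  # t = 4*
        ],
    }

The same episode_match score from the sketch above could then compare new concept-level episodes against this level-2 case.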
Some of the important concepts that Asa H needs to know are at still higher levels in the case memory hierarchy. These concepts emerge after the lower level concepts have been developed.
Friday, March 1, 2013
Asa H experiments
The following is an abstract I'm working on for a conference next year:
Our recently developed "Asa H" software architecture (KAS Trans. 109 (3/4): 159-167) consists of a hierarchical memory assembled out of clustering modules and feature detectors. Various experiments have been performed with Asa H 2.0:
1. Don't advance the time step or record input components until an input changes significantly.
2. Time can be made a component of the case vector.
3. There is a tradeoff between time spent organizing the knowledge base (to reduce the search needed later) and time spent searching a less organized knowledge base.
4. If utility is low, search; stop searching if utility rises.
5. The cost of an action can be a vector.
6. Before deleting a small vector component, test whether utility changes when it is deleted.
7. Asa H has a number of parameters which are not easy to set. This set of parameters can be treated as a vector, and Asa H can be run for a period of time while we record the utility gains. A second set of parameters can be employed during a run in the same environment, and the utility gain is again recorded. With these vectors and utilities we can use the Asa H extrapolation algorithm to improve the parameter settings.
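As a sketch of experiment 7, I'll assume simple linear extrapolation between the two trial parameter vectors (the actual Asa H extrapolation algorithm is not reproduced here; the parameter values and utilities below are made-up illustrations):

    # Sketch: extrapolate from two parameter vectors toward higher utility.
    # Assumes simple linear extrapolation; not the actual Asa H algorithm.
    def extrapolate(p1, u1, p2, u2, step=1.0):
        """Return a new parameter vector extended past the better of p1, p2."""
        if u2 < u1:  # make p2 the better-scoring vector
            p1, u1, p2, u2 = p2, u2, p1, u1
        return [b + step * (b - a) for a, b in zip(p1, p2)]

    # Two trial settings and their recorded utility gains:
    params_a, utility_a = [1.0, 4.0], 1.2
    params_b, utility_b = [2.0, 3.0], 1.7
    print(extrapolate(params_a, utility_a, params_b, utility_b))
    # -> [3.0, 2.0] (continue in the direction that improved utility)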
Asa H natural language understanding
I've studied the 1000 most commonly used words in English. I believe I know how to teach Asa H 1/4 to 1/3 of the concepts involved in understanding these terms. I am not sure how much will be required before Asa H can learn autonomously from the web or from human texts. Would these requirements be relaxed if Asa could query humans when needed (for synonyms, linguistic examples, sensory examples, etc.)? Such a query system would be easy to program into Asa. (Triggered, say, when the degree of match is too low.)
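A minimal sketch of such a trigger (the function names, threshold value, and console prompt are hypothetical; the match score would come from Asa's own case matching):

    # Sketch: ask a human for help when no stored case matches well.
    QUERY_THRESHOLD = 0.5  # illustrative value

    def process_input(text, case_memory, match_fn):
        best = max((match_fn(text, c) for c in case_memory), default=0.0)
        if best < QUERY_THRESHOLD:
            # Fall back to asking the human observer for a synonym/example.
            answer = input(f"I don't understand '{text}'. Can you give a synonym? ")
            return ("query", answer)
        return ("matched", best)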