In chapter 1 of my book Twelve Papers (available at www.robert-w-jones.com) I describe how Asa H was taught a set of words, with sensory examples of each.
With Lego NXT sensors I believe I can teach Asa H a similar set of words and concepts (meanings).
I can also teach letter and numeral recognition, as well as recognition of some common objects in Asa H's environment. But the Lego NXT sensors may be too limited, and too few in kind, to give Asa H the 800 or more examples I think are needed in order to begin to deal with natural language. (I do have webcams as well, of course, and Lego has used these too.)
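To make concrete what "teaching a word by sensory examples" can mean, here is a minimal Python sketch; it is not Asa H's actual code. Each taught word stores a few sensor vectors (NXT light, sound, or ultrasonic readings could be scaled into such vectors), and a new reading is then named by the nearest word prototype. The function names and the two-component readings are hypothetical.

import math

prototypes = {}  # word -> list of example sensor vectors

def teach(word, example):
    """Store one sensory example (a list of numbers) under a word."""
    prototypes.setdefault(word, []).append(example)

def classify(reading):
    """Return the taught word whose average example lies nearest the reading."""
    def distance(word):
        examples = prototypes[word]
        n = len(examples)
        mean = [sum(v[i] for v in examples) / n for i in range(len(reading))]
        return math.dist(mean, reading)
    return min(prototypes, key=distance)

# Hypothetical (temperature, light) readings scaled to [0, 1]:
teach("hot",  [0.9, 0.7])
teach("hot",  [0.8, 0.6])
teach("cold", [0.1, 0.2])
print(classify([0.85, 0.65]))  # -> "hot"

With enough distinct sensor channels, each new word only needs a handful of such examples; the open question is whether a small sensor suite can supply 800 or more separable prototypes of this kind.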
Or can computer speed and a small number of sensors substitute for the large volume of sensory input that humans receive?