Monday, September 26, 2016
A small number of the vocabulary words that Asa has learned (see my blog of 5 November 2015) should trigger action (see chapter 1 of my book Twelve Papers, www.robert-w-jones.com, section on learning protolanguage): words like stop, turn, fast, slow, leave, move, lift, drop, kick, and carry. Asa has been learning a few more like look, walk, run, and jump. But how do we tell Asa when we want it to act and when we don't? With humans, how loud the command is may be the deciding factor; in writing, an exclamation point might serve as the trigger. These could be implemented in Asa, but should they be?
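A minimal sketch of the trigger idea, in Python rather than anything Asa actually runs; the names (ACTION_WORDS, should_act) and the loudness threshold are illustrative assumptions, not part of Asa H:

```python
# Hypothetical sketch: decide whether a learned word should trigger action.
# A word triggers only if it is a known action word AND it arrives either
# written with an exclamation point or spoken above a loudness threshold.

ACTION_WORDS = {"stop", "turn", "fast", "slow", "leave", "move",
                "lift", "drop", "kick", "carry", "look", "walk",
                "run", "jump"}

def should_act(utterance: str, loudness_db: float = 0.0,
               loudness_threshold: float = 70.0) -> bool:
    """Return True if the utterance should trigger action."""
    word = utterance.rstrip("!").strip().lower()
    if word not in ACTION_WORDS:
        return False
    # Written trigger: trailing "!".  Spoken trigger: loud enough.
    return utterance.strip().endswith("!") or loudness_db >= loudness_threshold

print(should_act("Stop!"))                    # written trigger
print(should_act("stop", loudness_db=80.0))   # spoken trigger
print(should_act("stop"))                     # no trigger
```

Whether such a hard trigger is the right design, rather than letting Asa learn when commands demand action, is exactly the open question above.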
Friday, September 23, 2016
I have presented Asa H robots with progressively more complex activities/experiences in order to grow its hierarchy of mental concepts. (See my blogs of 18 July 2014 and 5 November 2015.) I have also given names to some of Asa's concepts. (See my blog of 11 June 2016.) As I teach Asa to talk and read I again need a curriculum. I need to start with something like a child's early reader: Dick, Jane, and Baby Sally. Should learning to read be conducted concurrently with the learning of the physical concepts, actions, etc.?
Thursday, September 22, 2016
Humans occupy a single contiguous volume. Asa H may control a distributed system of robots that are not contiguous. Asa may then develop a sense of self that differs from what we humans experience. Will Asa find it easier to understand quantum entanglement for instance?
I have assembled a multi-microcontroller architecture (H. W. Lee, MSc thesis, Cornell University, May 2008) operating over the internet using a client/server network. Each client or server program is running in RobotBASIC. (Explained in the book Hardware Interfacing with RobotBASIC, Blankenship and Mishal, 2011, pages 83-84.) The software runs a bit slower than I'd like, but the robotic hardware is what dominates the overall speed of operation.
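The client/server pattern itself is simple; here is a minimal sketch in Python as a stand-in for the RobotBASIC programs. The message format ("ACK ..."), function names, and single-shot server are all illustrative assumptions, not the actual Asa H network code:

```python
# Sketch of one client/server exchange like those linking the
# microcontrollers over the network.  The server accepts one client,
# acknowledges its command, and closes.
import socket
import threading

def serve_once(host="127.0.0.1", port=0):
    """Start a one-shot server in a background thread; return its port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port 0: let the OS pick a free port
    srv.listen(1)
    actual_port = srv.getsockname()[1]

    def handler():
        conn, _ = srv.accept()
        with conn:
            cmd = conn.recv(1024).decode()
            conn.sendall(("ACK " + cmd).encode())  # acknowledge the command
        srv.close()

    threading.Thread(target=handler, daemon=True).start()
    return actual_port

def send_command(port, cmd, host="127.0.0.1"):
    """Client side: send a command string and return the server's reply."""
    with socket.create_connection((host, port)) as c:
        c.sendall(cmd.encode())
        return c.recv(1024).decode()

port = serve_once()
print(send_command(port, "move"))
```

In the real system each microcontroller's program would loop, servicing many commands, with the robot hardware setting the pace.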
Wednesday, September 21, 2016
We say that we want to build "artificial general intelligences" or "universal artificial intelligences." But in the modern world humans are specialists. No one human being could be an expert in all of physics, or all of mathematics, or all of biology. How important is individual "talent"? Can I just train different copies of Asa H on different sets of knowledge and experiences, or must some of the algorithms Asa uses be specialized too? Do we need to develop one AI or many (like Gardner's multiple intelligences)?
Tuesday, September 20, 2016
Embodiment is not the silver bullet some people would have us believe. It is, however, the easy way to define a number of important concepts. (See my blog of 1 October 2015 for examples.) Still, training an AI in a simulator is faster than training it in the real world. The biggest problem with simulators is giving them enough channels of sensory input for the AI to have a realistic experience. With Asa H I am trying to use simulators to present less complex sensations and robots to provide others.
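The mixing of simulated and real sensory channels can be sketched as follows; this is only an illustration of the idea, in Python, and the function name and source tags are my own invented conventions, not Asa's input format:

```python
# Hypothetical sketch: merge simulated sensations and real robot sensor
# channels into a single tagged input vector, so that downstream
# processing can still tell (and possibly weight) the two sources.

def combined_input(sim_channels, robot_channels):
    """Concatenate simulated and real channels, tagging each by source."""
    vector = [("sim", v) for v in sim_channels]
    vector += [("robot", v) for v in robot_channels]
    return vector

sample = combined_input([0.1, 0.5], [12.0])
print(sample)
```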
I have bought an imp001 development kit (in addition to the Adafruit and Arduino boards I already had). I have commented previously that the internet of things may be a good way to give an AI the large number of sensory inputs it needs in order to understand the world.