Thursday, March 31, 2016

Embodiment and successful human-computer communication

Wittgenstein argued that it is "the common behaviour of mankind" that is "the system of reference by means of which we interpret an unknown language" (Philosophical Investigations, 1953; Blackwell, 2001).  If this is true then for an AI and a human to communicate and understand each other they need to share the same sets of behaviors.  This is also what is needed to define the most compact and most primitive set of basic concepts and vocabulary.  Again, with Asa H I am trying to accomplish this with the kind of embodiment described in my blogs of 1 Oct. and 5 Nov. 2015. (Also see a beginning of this in the protolanguage section of chapter 1 of my book Twelve Papers.)

A robot's pain

I have placed small aluminum foil tabs on various Lego bricks in my Lego NXT robot.  When the bricks are properly seated the tabs make good electrical contact.  When two bricks pull sufficiently far apart the tabs lose contact and this signals a pain to the Asa H brain.  This sort of pain sensor allows the robot itself, or an external agent, to reseat the bricks and cure the pain. Asa can learn which pains are correlated with a given robot malfunction.
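
The idea can be sketched in a few lines of code. This is only an illustration, not Asa H's actual implementation: each foil tab reads True while its bricks are seated, and the fraction of open joints becomes the pain level fed to the agent.

```python
# Hypothetical sketch of the foil-tab pain sensor described above.
# Each entry in `contacts` is True while that brick joint is seated.

def pain_signal(contacts):
    """Fraction of brick joints that have lost electrical contact."""
    if not contacts:
        return 0.0
    broken = sum(1 for seated in contacts if not seated)
    return broken / len(contacts)

# All four joints seated: no pain.  Two of four joints open: pain = 0.5.
print(pain_signal([True, True, True, True]))    # 0.0
print(pain_signal([True, False, True, False]))  # 0.5
```

A graded signal like this (rather than a single on/off bit) would let Asa correlate partial separations with particular malfunctions.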

Monday, March 28, 2016

Limited vocabulary for Asa H

If there can be a limited (minimal) vocabulary and a small number of primitive terms and concepts (in the sense of Ramsey, Carnap, Lewis, the Canberra plan, and Carnap's Aufbau project) then I would want to be sure to ground each of those concepts using the methods I described in my blogs of 1 Oct. 2015 and 5 Nov. 2015.  (The set of primitive concepts may not be unique, of course.) Not all of the concepts need be defined on the same level in the case hierarchy. They may reside in different languages.

Thursday, March 24, 2016

Private language? More experimental philosophy

As my artificial intelligence Asa H learns spatial-temporal patterns from the world it collates these observations into concepts of various degrees of abstraction.  I.e., it learns a hierarchically organized vocabulary/language (or series of vocabularies/languages) with which it then describes/understands the world it lives (acts) in.  If Wittgenstein, Dewey, and Quine are right no private language is possible and it should be possible for me to decode all of Asa's casebases and translate them into some human understandable natural language.  (The translation process might be very difficult, however.)  I have been successful at some of this as reported in my publications and in this blog over the years.  But there have also been portions of Asa's casebase that I have not been able to translate, and still other bits that I have gotten wrong.

It is also true that if one starts with two identical AIs and trains both on exactly the same input examples (but presented in different orders) the two resulting minds can develop different concepts (internal vocabularies). Various machine learning algorithms behave this way.
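
This order dependence is easy to demonstrate. The sketch below (an illustration I constructed, not Asa H's own algorithm) is a greedy one-pass clusterer: each input either merges into the nearest existing concept or, if too distant, founds a new one. The same three observations, presented in two different orders, yield two different concept sets.

```python
# Greedy one-pass clustering: merge an input into the nearest concept
# if it lies within `threshold`, otherwise create a new concept.
def learn(stream, threshold=1.2):
    concepts = []  # each concept is [centroid, count]
    for x in stream:
        if concepts:
            c = min(concepts, key=lambda c: abs(c[0] - x))
            if abs(c[0] - x) < threshold:
                c[0] = (c[0] * c[1] + x) / (c[1] + 1)  # running average
                c[1] += 1
                continue
        concepts.append([x, 1])
    return [round(c[0], 3) for c in concepts]

# Same data, different presentation orders, different learned concepts:
print(learn([0.0, 1.0, 2.0]))  # [0.5, 2.0]
print(learn([1.0, 2.0, 0.0]))  # [1.5, 0.0]
```

The two learners end up carving the same observations into different vocabularies, just as Kelly's scenario below describes.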

Kelly has suggested conditions under which "minor differences in the order in which they receive the data may lead to different inductive conclusions in the short run.  These distinct conclusions cause a divergence of meaning between the two scientists..." (The Logic of Reliable Inquiry, Oxford U. Press, 1996, pp. 381-382).  And "two logically reliable scientists can stabilize in the limit to theories that appear to each scientist to contradict one another." (p. 383)  Further: "nothing in what follows presupposes meaning invariance or intertranslatability." (p. 384)  Perhaps neither could then understand (or translate) the other's private language (concepts/vocabulary/ontology).

Clearly, this is also related to scientific pluralism, the idea of reconceptualizing reality, and the possibility of having alternate realities.

Wednesday, March 23, 2016

Experimental philosophy and AI research

Being an advocate of scientific pluralism I may explore or make use of some viewpoint without expecting it to be universal.  We may come to understand one way of thinking (or one mechanism of thinking) without believing that we understand all of the intricacies ("mechanisms") of thought.

My creativity machine experiments (Trans. Kansas Acad. Sci., vol. 102, pg 32, 1999) rewrote natural language as PROLOG or other code ("logical language" if you will) and then applied logic programming or something similar to deduce conclusions or make postulates.  This can be thought of as a computer implementation of logical positivism.  Something similar happens when my AI Asa H starts with Lego NXT sensor input only and learns more and more complex (and abstract) spatial-temporal patterns with which it comes to understand (be able to act successfully in) its world.
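
The logic-programming step can be suggested with a toy forward-chaining deducer. The original work used PROLOG; this Python stand-in, and its example facts and rules, are illustrative assumptions of mine, not the published code.

```python
# Toy forward chaining: apply rules (premises -> conclusion) repeatedly
# until no new facts are produced.
def deduce(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical facts and rule, as might be extracted from natural language:
rules = [(("plasma is hot", "hot gases expand"), "plasma expands")]
facts = deduce(["plasma is hot", "hot gases expand"], rules)
print(facts)  # includes the deduced conclusion "plasma expands"
```

Rewriting text into premise/conclusion pairs like these, then chaining, is the sense in which the experiment implements a bit of logical positivism in code.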

Experiments with Asa H can also be thought of as work in experimental philosophy looking into the details of  functionalism.

Thursday, March 17, 2016

The many dimensions of vagueness in Asa H

My artificial intelligence Asa H incorporates vagueness in a number of ways.  Clustering averages multiple spatial-temporal observations to form a given concept.  None of the individual observations is an exact match to the concept.  Some similarity measure (or possibly several different similarity measures) compares an observed spatial-temporal pattern with a known concept. Generalizations across the hierarchical memory organization are abstractions (vague). Time and spatial dilations constitute yet another source of vagueness. This all has implications for the philosophy of vagueness.
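
One possible similarity measure (an illustrative assumption on my part; Asa H may use others) is cosine similarity between an observed pattern vector and a stored concept prototype. Values near 1.0 mean a close but rarely exact match, which is one source of the vagueness described above.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two pattern vectors; 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

concept = [1.0, 0.5, 0.0]      # learned prototype (a cluster average)
observation = [0.9, 0.6, 0.1]  # a new observation: similar, not identical
print(round(cosine_similarity(concept, observation), 3))  # close to, but below, 1.0
```

A threshold on this score decides whether an observation "counts as" an instance of the concept, so the vagueness of the concept is partly the vagueness of that threshold.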

Friday, March 11, 2016

Problem lists

Laboratories, research groups, and institutions should keep multiple "problem lists."  One list may contain open questions in your field of research.  In my case these might be questions like what consciousness is or how we should implement attention.  A different list would contain items that we think should be improved upon.  In my case, for example, I am doing certain things to control complexity but I may hope or believe that we might do better.  Or I may be using some algorithm but believe that a better one may be possible.

In today's world most work is done by groups. (I am one of the few remaining lone wolves.) Some member of the group may have useful ideas about how to solve a problem faced by another researcher. Several such outstanding problem lists should be maintained and reviewed periodically.

Thursday, March 10, 2016

Doing more with simulators

Robot simulators are faster and more economical than real physical robots.  Any simulation can be thought of as two coupled Turing machines, one representing the robot (i.e., Asa H software plus any pre- and post-processors) and the other representing the environment.  (See my blogs of 7 Jan. 2015 and 23 June 2015.) I have been recording the environment's response during my embodied robot concept learning experiments, i.e., the work described in the blogs of 1 Oct. 2015 and 5 Nov. 2015. I can now use these recordings as the case base for a case-based reasoner which serves as a virtual robot's environment.
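
A minimal sketch of the recorded-environment idea (the data and names here are my illustrative assumptions, not the actual logged cases): the environment is replaced by a case base of (action, observed response) pairs logged from the real robot, and a query simply returns the response of the nearest recorded action.

```python
def nearest_response(case_base, action):
    """Case-based environment: reply as the real world did to the
    most similar recorded action (squared Euclidean distance)."""
    distance = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    _, response = min(case_base, key=lambda case: distance(case[0], action))
    return response

# Cases recorded during embodied runs: (motor command, sensed outcome).
case_base = [((1.0, 0.0), "moved forward"),
             ((0.0, 1.0), "turned right"),
             ((-1.0, 0.0), "moved backward")]

print(nearest_response(case_base, (0.9, 0.1)))  # "moved forward"
```

Coupling Asa H to this lookup in a sense-act loop gives the two-Turing-machine picture above: one machine is Asa, the other is this replayed environment.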

Sunday, March 6, 2016


Years (decades) ago I had a student who was inspired by the original Star Wars movie to get into S.T.E.M.  She wanted to build R2-D2. (I have always preferred the robots from the film Silent Running and, before that, the robots from Asimov's book I, Robot and Jack Williamson's The Humanoids.) At that time we could not build R2-D2.  Now, however, my LEGO NXT-embodied, Asa H-directed robots can perform all of the functions/behaviors that R2 exhibited in the first film. (I have not seen some of the more recent Star Wars films.) Those functions are:
          Mobile, wheeled but can step slightly (all terrain wheels and suspension)
          Arms with grippers, manipulators, and/or fixturing
          Accepts plug-in memory
          Head rotates
          Interfaces to computers
          Communicates in a robot language
          Reads and stores and replays data
          Fire fighting
          Speech recognition and voice control
A modified version of Blankenship's "Arlo" could do all this for a cost of perhaps $3000.
(But it was never clear to me that any of the Star Wars robots had sufficient capabilities so as to justify the cost of their construction and deployment.)

Elon Musk was similarly inspired to want to build the Millennium Falcon. Some fiction can serve as inspiration.

Thursday, March 3, 2016

Reconceptualizing reality

A number of  the models that Asa H has created are non-Markovian. (Which is quite natural given the data structures Asa H uses. See my blog of 22 Nov. 2010)  This is probably the most profound change that Asa has suggested so far. (See also my blog of 28 Feb. 2011)

Tuesday, March 1, 2016

Asa's subcategorization of its senses

For simplicity's sake I have frequently mounted some of Asa H's sensors in fixed positions on the PC: sound sensors and smell (smoke) sensors, for example, and sometimes fixed webcams. Other sensors must be carried along on Asa's mobile LEGO robots. Examples would be pain and force sensors and accelerometers and gyros.  I have frequently had a third group of sensors which the mobile robots must grasp and carry to the location where they will be used.  These are things like electric and magnetic field probes, thermometers, GM counters, pH and salinity (taste) probes, etc.  When Asa is embodied in this particular fashion it forms three categories of senses.
Humans do not categorize their senses in this way. (Although one may have to bring one's hand near a heat source in order to feel its warmth, and must bring food or drink to one's mouth to taste it.) I can prevent Asa from making these distinctions by mounting all of the various sensors on the mobile robots.  This is more cumbersome for the robotic elements, however.
Again, for some work, like natural language understanding, I would like Asa to understand human concepts as closely as possible.  For other projects, like attempting to reconceptualize reality, I am happy for Asa to form its own unique set of categories.
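
The three mounting classes described above can be made explicit as a sensor-to-category map. This is purely illustrative (the sensor names are examples taken from the text, not Asa's internal representation):

```python
# Sensors grouped by how they are embodied, per the description above.
sensor_category = {
    "sound": "fixed on PC",
    "smoke": "fixed on PC",
    "webcam": "fixed on PC",
    "pain": "carried on mobile robot",
    "force": "carried on mobile robot",
    "accelerometer": "carried on mobile robot",
    "gyro": "carried on mobile robot",
    "thermometer": "grasped and carried to site",
    "GM counter": "grasped and carried to site",
    "pH probe": "grasped and carried to site",
}

# Embodied this way, Asa tends to form one sense category per mounting class:
print(sorted(set(sensor_category.values())))  # three categories
```

Mounting everything on the mobile robots collapses this map to a single category, which is why that arrangement suppresses the distinctions.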