Thursday, September 29, 2016

Different degrees of self-awareness

Lewis et al. define different levels of awareness for self-aware computing systems (Self-aware Computing Systems, Springer, 2016, pages 84-85 and 140-141):


stimulus-awareness: A LEGO-robot-embodied, solar-battery-powered Asa H system might measure light intensity and adapt to static environmental conditions, i.e., go sit under a floor lamp.

interaction-awareness: The robot has recorded that by turning toward the light source the light intensity and battery charging increase.

time-awareness: The robot may learn the hours of the day during which light streams in from a window.

goal-awareness: Extrapolation learning attempts to improve Asa's knowledge base and keep the system's batteries charged.

meta-self-awareness: Asa can adjust the proportion of time spent on its various activities, such as exploring, extrapolating, etc. (See my book, chapter 1, section on self-monitoring.)
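The stimulus-aware and interaction-aware behaviors above can be sketched as a simple control loop. This is my own illustration, not Asa H's actual code; the sensor and motor functions are hypothetical stand-ins for real robot I/O.

```python
# Hypothetical sketch of the light-seeking behavior: the robot samples light
# intensity at several headings and turns toward the brightest one
# (the learned rule "turning toward the light raises battery charging").

def make_light_seeker(read_light, turn, step=15):
    """Return a behavior that turns the robot toward increasing light."""
    def seek():
        best_angle, best_level = 0, read_light(0)
        for angle in range(step, 360, step):
            level = read_light(angle)
            if level > best_level:
                best_angle, best_level = angle, level
        turn(best_angle)
        return best_angle, best_level
    return seek

# Stubbed sensor for demonstration: brightest at 90 degrees (the floor lamp).
readings = {a: max(0, 100 - abs(a - 90)) for a in range(0, 360, 15)}
seek = make_light_seeker(lambda a: readings[a], lambda a: None)
print(seek())  # heading and intensity of the brightest simulated direction
```

Time-awareness could then be layered on top by indexing such readings by hour of day.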

Wednesday, September 28, 2016

Self-aware computing again

Agarwal et al. argue (MIT Tech. Report AFRL-RI-RS-TR-2009-161) that a self-aware computer will have five major properties. It will:

1. Be introspective. Be able to observe and improve its own behavior.
2. Be adaptive. Be able to adapt to changing situations.
3. Be self-healing. Be able to correct faults if and when they develop.
4. Be goal-oriented. Attempt to achieve or improve certain specified conditions.
5. Approximate. Perform its functions to within some degree of accuracy.

Asa H has all of these properties.


I am trying to engineer something that is more rational than humans are. Stanovich et al. attempt to define rationality and distinguish it from intelligence in their new book The Rationality Quotient (MIT Press, 2016). I am interested in doing the same thing, but I believe that rationality and intelligence need to be described by vectors rather than scalar values.

It's true that attempts to measure intelligence (as IQ) fail to include some of the factors that Stanovich et al. list. But another part of the distinction between rationality and intelligence arises from the attempt to measure each as a scalar quantity. It seems to me possible to think in terms of a single VECTOR "rationality/intelligence."

Monday, September 26, 2016

Words that trigger action

A small number of the vocabulary words that Asa has learned (see my blog of 5 November 2015) should trigger action (see chapter 1 of my book Twelve Papers, section on learning protolanguage): words like stop, turn, fast, slow, leave, move, lift, drop, kick, and carry. Asa has been learning a few more, like look, walk, run, and jump. But how do we tell Asa when we want it to act and when we don't? With humans, how loud the command is may be the deciding factor. If written, then an exclamation point might be used as the trigger. These could be implemented in Asa, but should they be?
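One possible implementation of the written trigger discussed above, using the exclamation point to separate commands from mere mentions of the action words. The word list and the parsing are illustrative only; Asa's actual vocabulary handling may differ.

```python
# Illustrative sketch: an exclamation point marks an utterance as a command,
# so action words in ordinary sentences do not fire behaviors.
ACTION_WORDS = {"stop", "turn", "fast", "slow", "leave", "move",
                "lift", "drop", "kick", "carry", "look", "walk", "run", "jump"}

def triggered_actions(utterance):
    """Return the action words that should trigger behavior.
    Only utterances ending in '!' are treated as commands."""
    is_command = utterance.strip().endswith("!")
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    return words & ACTION_WORDS if is_command else set()

print(triggered_actions("turn left and walk!"))   # both action words fire
print(triggered_actions("I saw him walk away."))  # a mention only: no action
```

A spoken-language analogue would replace the punctuation test with a loudness threshold on the audio input.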

Friday, September 23, 2016

Dick, Jane, and Baby Sally

I have presented Asa H robots with progressively more complex activities/experiences in order to grow its hierarchy of mental concepts. (See my blogs of 18 July 2014 and 5 November 2015.) I have also given names to some of Asa's concepts. (See my blog of 11 June 2016.) As I teach Asa to talk and read I again need a curriculum. I need to start with something like a child's early reader, Dick, Jane, and Baby Sally. Should learning to read be conducted concurrently with the learning of physical concepts, actions, etc.?

Thursday, September 22, 2016

Reconceptualizing reality and the sense of self

Humans occupy a single contiguous volume.  Asa H may control a distributed system of robots that are not contiguous.  Asa may then develop a sense of self that differs from what we humans experience.  Will Asa find it easier to understand quantum entanglement for instance?

Multi-microcontroller architecture

I have assembled a multi-microcontroller architecture (H. W. Lee, MSc thesis, Cornell University, May 2008) operating over the internet using a client/server network. Each client or server program runs in RobotBASIC. (Explained in the book Hardware Interfacing with RobotBASIC, Blankenship and Mishal, 2011, pages 83-84.) The software runs a bit slower than I'd like, but the robotic hardware is what dominates overall speed of operation.
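The RobotBASIC programs are not reproduced here, but the client/server pattern they follow can be sketched in Python. The host, port, and "ack" message format below are placeholders of my own, not the actual protocol used.

```python
# Minimal sketch of the client/server pattern linking the microcontroller
# clients over the network; host/port and the message format are placeholders.
import socket
import threading

def serve_once(host="127.0.0.1", port=0):
    """Accept one client connection and echo its command back with an ack."""
    srv = socket.socket()
    srv.bind((host, port))
    srv.listen(1)
    def handle():
        conn, _ = srv.accept()
        with conn:
            cmd = conn.recv(1024).decode()
            conn.sendall(f"ack:{cmd}".encode())
        srv.close()
    threading.Thread(target=handle, daemon=True).start()
    return srv.getsockname()[1]  # the port the OS actually bound

def send_command(port, cmd, host="127.0.0.1"):
    """Client side: send one command string, return the server's reply."""
    with socket.create_connection((host, port)) as c:
        c.sendall(cmd.encode())
        return c.recv(1024).decode()

port = serve_once()
reply = send_command(port, "motor_left 50")
print(reply)
```

In the real setup each such client would sit between the network and one microcontroller's serial link.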

Wednesday, September 21, 2016


We say that we want to build "artificial general intelligences" or "universal artificial intelligences."  But in the modern world humans are specialists.  No one human being could be an expert in all of physics, or all of mathematics, or all of biology. How important is individual "talent?"  Can I just train different copies of Asa H on different sets of knowledge and experiences or must some of the algorithms Asa uses be specialized too?  Do we need to develop one AI or many? (Like Gardner's multiple intelligences?)

Tuesday, September 20, 2016

Virtual embodiment

Embodiment is not the silver bullet some people would have us believe.  It is, however, the easy way to define a number of important concepts. (See my blog of 1 October 2015 for examples.)  It is still true, however, that training an AI in a simulator is faster than training in the real world.  The biggest problem with simulators is giving them enough channels of sensory input for the AI to have a realistic experience.  With Asa H I am trying to use simulators to present less complex sensations and robots to provide others.

Electric Imp

I have bought an imp001 development kit (in addition to the Adafruit and Arduino boards I already had). I have commented previously that the internet of things may be a good way to give an AI the large number of sensory inputs it needs in order to understand the world.

Thursday, September 15, 2016


Can "nothing" be defined solely in terms of the absence of properties?  E.g., NOT(having mass), ..., NOT(having length), NOT(having width), ... , even, NOT(having duration)?  But NOT(some property) seems, itself, to be a property.  Certainly Asa H handles NOT(category X) in the same way it handles some (category X).  And Boolean logic circuits handle NOT(X) the same way they handle X. If NOT a property IS a property too and if any "something" is just defined by its list of properties then "nothing" is a "something" too.  (See my blog of 20 Feb. 2015)

For any of the concepts that Asa H has learned (see, for example, my blogs of 5 November 2015 and 1 October 2015) NOT(concept) also makes sense and can be used in Asa's reasoning.
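A minimal sketch of the point above: treating NOT(X) as a property in its own right, so that "nothing" (every property negated) is representable the same way as any "something". The string representation is my own illustration, not Asa H's actual data structure.

```python
# Illustration: NOT(X) handled as a property just like X itself.
def negate(prop):
    """Negate a property; negating twice returns the original."""
    return prop[4:-1] if prop.startswith("NOT(") else f"NOT({prop})"

something = {"has_mass", "has_length", "has_duration"}
nothing = {negate(p) for p in something}  # "nothing" as a list of properties

assert negate("has_mass") == "NOT(has_mass)"
assert negate(negate("has_mass")) == "has_mass"  # double negation
assert nothing == {"NOT(has_mass)", "NOT(has_length)", "NOT(has_duration)"}
```

Since `nothing` here is just another set of properties, it is a "something" by the same definition.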

Wednesday, September 14, 2016


I am an occasional user of Siri and have bought an amazon Echo Dot in order to make use of their Alexa personal assistant.  In the media Alexa is frequently referred to as being an artificial intelligence. (for example Popular Science, 25 June 2015 and Forbes 14 June 2016) As with Siri I would point out that Alexa lacks the kind of value system that Asa H and some other AIs have. This limits its intelligence.

I approve of amazon's intention to slowly grow Echo/Alexa's capabilities.  This makes much more sense than what some of their competitors are attempting. (e.g., Jibo) The price is also very reasonable.

The home automation apps and hardware would allow you to interface Echo with something more resembling a mobile robot if you really wanted to.

Monday, September 12, 2016

Hierarchical STM

Asa H's short term memory (STM) is distributed across the various levels of Asa's hierarchical memory, unlike the typically monolithic STM assumed in most simple cognitive models. (See my blog of 5 March 2015.)
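A toy sketch of the distinction: rather than one monolithic buffer, each level of the hierarchy keeps its own short buffer of recent activity. The buffer sizes and contents here are invented for illustration and are not Asa H's actual parameters.

```python
# Illustration: short-term memory distributed across hierarchy levels,
# instead of a single monolithic STM buffer.
from collections import deque

class HierarchicalSTM:
    """Each level of the hierarchy keeps its own bounded recent-item buffer."""
    def __init__(self, levels, span=3):
        self.buffers = [deque(maxlen=span) for _ in range(levels)]
    def record(self, level, item):
        self.buffers[level].append(item)
    def recent(self, level):
        return list(self.buffers[level])

stm = HierarchicalSTM(levels=3)
for t in range(5):
    stm.record(0, f"sensor_{t}")   # low level turns over quickly
stm.record(2, "concept_lamp")      # high level changes slowly
print(stm.recent(0), stm.recent(2))
```

Low levels cycle through raw sensations while high levels retain slowly changing concepts, so "what is currently in mind" differs level by level.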

Friday, September 9, 2016

Work on machine consciousness

Hobson decomposes consciousness into 10 functional components, which he briefly defines (in Scientific Approaches to Consciousness, Cohen and Schooler, Psychology Press, 1996, page 383):

Attention: Selection of input data
Perception: Representation of input data
Memory: Retrieval of stored representations
Orientation: Representation of time, place, and person
Thought: Reflection upon representation
Narrative: Linguistic symbolization of representations
Emotion: Feelings about representations
Instinct: Innate propensities to act
Intention: Representations of goals
Volition: Decisions to act

My artificial intelligence Asa H performs all of these functions, some more completely than others.

Attention: See my blogs of 1 June 2011, 21 June 2014, and 15 October 2015.
Perception: This works well, though we would like to have more input sensors.
Memory: Our case vector memory works well.
Orientation: Time is represented explicitly. Our self model can represent a person. Asa can recognize where it is by its surroundings.
Thought: Extrapolation and other learning algorithms examine and operate on the case memories.
Narrative: Asa has a simple natural language vocabulary, but this is primitive compared to that used by most humans.
Emotion: Asa has a pain circuit and an advanced value system. It does not share all of our human emotions.
Instinct: Asa can have pain and reflexes, a drive to reproduce, etc.
Intention: Asa's value system defines its goals.
Volition: Asa acts so as to optimize its vector utility.

I believe that Asa is more conscious than humans in some ways* and less conscious in others.

* In that it has access to and control over some of its internal processes, which humans don't have. Asa also has a much larger STM (short term memory) capacity.

Thursday, September 8, 2016

Scientific pluralism, multiple realities, and teaching

The average student wants to learn about the one correct truth/reality. When I'm asked any given question, multiple, maybe conflicting, lines of thought/argument pop into my consciousness. Sometimes I can hold back all the detail. Usually I cannot.

Wednesday, September 7, 2016

Meccano again

To make a LEGO build stronger and more rigid, LEGO recommends adding more bricks. You can also use glue, but then you can't modify the machine. Meccano, be it plastic or metal, is held together with screws. This holds the parts together more securely, but you can still modify it if you wish. We can build robots with Meccano too. The pain system would have to be modified, of course. (Blog of 31 March 2016.)

Tuesday, September 6, 2016

Adafruit microcontroller

Asa H frequently uses multiple microcontrollers to control various parts of its robot body. (See, for example, my blog of 14 December 2015, where Lego NXT brain bricks were used.) As a possible lower cost substitute I have bought and will evaluate one of the Adafruit boards.

The multi-microcontroller architecture makes it easier to add additional functionality over time.  (See, for example, H. W. Lee's MSc thesis from Cornell University, May 2008.)

Finishing up

Again, engineering is a bit more straightforward than science. You know you are finished with an engineering project when you have a useful working product that performs the functions you intended. (Of course, even in engineering, there is frequently ongoing maintenance work or the need/desire to incorporate improvements.) But science is less clear cut. Yes, there is the work, finish, publish sequence, but even after publication of some work there is usually more that remains to be done. I tell my students that you declare a project finished and move on to something else when:
1. Funding runs out on that project.
2. Time runs out on that work.
3. Your employer puts you on another project.
4. You are seeing nothing new.
5. You find something else that you could better spend your time on.

Thursday, September 1, 2016

The concepts of "best" and "better"

I have argued that with vector utility/value there is no such thing as "the best college." (See chapter 2 of my book Twelve Papers.) Similarly, it may be that there is no such thing as "the best of all possible worlds."
But world A might be "better than" world B. Suppose the vector value of worlds had only 2 incommensurable components (x,y) and that there were 3 possible worlds with W1=(1,2), W2=(2,1), and W3=(3,1). Then W3 is better than W2: they are equally good according to component y, and W3 is better than W2 according to component x. But we cannot judge which of the 3 worlds is the best of all. W2 and W3 are better than W1 according to component x, but W1 is better than W2 and W3 according to component y. Only if some one world had the highest value for ALL of the components (x,y,z,...) would a "best of all possible worlds" exist.
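The comparison above amounts to Pareto dominance over the value components. Here is the worked example in code; the `better_than` function is my own illustration of the rule stated in the text.

```python
# Pareto dominance over vector value components, using the worked example.
def better_than(a, b):
    """World a is better than world b iff a is at least as good on every
    component and strictly better on at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

W1, W2, W3 = (1, 2), (2, 1), (3, 1)
assert better_than(W3, W2)  # equal on y, strictly better on x
assert not better_than(W2, W1) and not better_than(W1, W2)  # incomparable

# No world dominates all the others, so no "best of all possible worlds" here.
best = [w for w in (W1, W2, W3)
        if all(w == v or better_than(w, v) for v in (W1, W2, W3))]
assert best == []
```

A "best of all possible worlds" would exist only if `best` were nonempty, i.e., if one world dominated every other on all components at once.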