Monday, March 30, 2015

Executive control in artificial intelligences

Some researchers believe that a general executive function for an AI requires a "hardware solution," a specialized module built into the AI's cognitive architecture (see the work of Dario D. Salvucci, for example). One argument in favor of this view is the belief that procedural knowledge and control processes are handled in different parts of the human brain, the basal ganglia versus the dorsolateral prefrontal cortex.

As a simple example of executive control, an executive process might pick the most activated schema and cause it to take control of cognition. Things get more difficult in situations where multitasking is required.
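Here is a minimal sketch of that winner-take-all selection, assuming each schema carries a numeric activation. The Schema class and the example activations are my own inventions for illustration, not code from any actual architecture.

    from dataclasses import dataclass

    @dataclass
    class Schema:
        name: str
        activation: float  # how strongly the current situation matches this schema

        def run(self):
            print(f"{self.name} takes control of cognition")

    def executive_step(schemas):
        """Pick the single most activated schema and let it run."""
        winner = max(schemas, key=lambda s: s.activation)
        winner.run()
        return winner

    executive_step([Schema("reach", 0.4), Schema("speak", 0.7), Schema("scan", 0.2)])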

Other researchers believe that a general executive function might result from simply making task goals another element in working memory. In rule-based systems, for example, one set of rules can control other rules by modifying the goals that are currently active in memory (see the work of David Kieras, for example). One argument in support of this view is the fact that modern computer operating system development typically minimizes the use of "hardware fixes," such as interrupt hardware, in dealing with control issues.
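A hedged sketch of that idea: goals held in working memory gate which rules may fire, and the rules themselves can change the active goal. The two-goal task and the rule set are invented for illustration.

    working_memory = {"goal": "make-tea", "kettle": "cold"}

    rules = [
        # (required goal, condition, action)
        ("make-tea", lambda wm: wm["kettle"] == "cold",
         lambda wm: wm.update(kettle="boiling")),
        ("make-tea", lambda wm: wm["kettle"] == "boiling",
         lambda wm: wm.update(goal="pour-water")),  # a rule that changes the active goal
        ("pour-water", lambda wm: True,
         lambda wm: wm.update(goal=None)),
    ]

    # Only rules whose goal matches the one in working memory may fire,
    # so goals control the rules with no special "hardware" needed.
    while working_memory["goal"]:
        for goal, cond, action in rules:
            if goal == working_memory["goal"] and cond(working_memory):
                action(working_memory)
                break

    print(working_memory)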

This is all related to the question of what consciousness is and how it works, both in humans and in machines.

Saturday, March 28, 2015

Another way in which technology has changed my life

I now look up all manner of things on the web with my smartphone, things I wouldn't even have known how to find in a library or in reference books in years gone by.

Wednesday, March 25, 2015

Some forms of cooperation or coordination in multi-agent systems

I have done some work with societies of Asa H agents and am interested in how cooperation can occur in such systems.  I have experimented with the following:

1. Trading and combining of casebases/knowledgebases between agents, both within a single generation and from one generation of agents to the next (a sketch appears below).

2. Deferring tasks to specialist agents, or the active assignment of tasks to specialists through the action of an administrative agent.

3. Localized action, assigned to agents according to their distribution in space.

4. Emergent cooperation.

5. Agents organized into blackboard systems.

6. Communicative coordination between Asa H and humans. (on my website see chapter 1 of my book Twelve Papers)

I have not done any work with communicative coordination between the AIs themselves, nor with agents negotiating or contracting with each other.
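As a sketch of item 1, here is one simple way two agents might trade and combine casebases. I am assuming, purely for illustration, that a casebase maps a case label to a utility score and that the higher-utility copy of a shared case wins on merging; Asa H's actual casebases are vector-valued and richer than this.

    def merge_casebases(cb_a, cb_b):
        """Combine two casebases, keeping the higher-utility copy of shared cases."""
        merged = dict(cb_a)
        for case, utility in cb_b.items():
            if case not in merged or utility > merged[case]:
                merged[case] = utility
        return merged

    agent_1 = {"grasp-cup": 0.8, "avoid-wall": 0.6}
    agent_2 = {"avoid-wall": 0.9, "follow-line": 0.7}

    # Each agent adopts the merged knowledge (within-generation trading);
    # the same merge could instead seed the next generation of agents.
    agent_1 = agent_2 = merge_casebases(agent_1, agent_2)
    print(agent_1)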


Monday, March 23, 2015

Curriculum for an AI and knowledge organization

The order in which you teach things to an intelligence is important. The "ideal" curriculum for an AI may differ from that for a human. In the case of humans (and a few AI systems) some results appear in In Order to Learn, F. E. Ritter et al., eds., Oxford Univ. Press, 2007. In the case of an AI, I have described some of what I've taught Asa H in my various publications and in this blog.

As an example, with both AIs and humans one should teach letters first, then words, then phrases, then composition. In general, start with small items to learn and progress toward larger items. If the elements of the topic being taught are interrelated, teach the individual elements first, then teach the associations between them.

Constrain early learning more.  Relax the constraints as learning proceeds.
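A staged-curriculum sketch of those two ideas: small items before large ones, and an early constraint that is relaxed as training proceeds. The learner, the stages, and the length cap are all placeholders of my own, not Asa H's training code.

    stages = [
        ("letters", ["a", "b", "c"]),
        ("words",   ["cab", "ace"]),
        ("phrases", ["a cab", "ace cab"]),
    ]

    class Learner:
        def __init__(self):
            self.knowledge = []
        def train(self, example):
            self.knowledge.append(example)  # stand-in for a real learning update

    learner = Learner()
    max_len = 1  # tight early constraint on example size
    for stage_name, examples in stages:
        for ex in examples:
            if len(ex) <= max_len:  # constrained early learning
                learner.train(ex)
        max_len *= 4  # relax the constraint as learning proceeds

    print(learner.knowledge)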

On the other hand, with a multiagent AI we might sometimes wish to train different agents on the same patterns but presented in different orders. This can force the different agents to form different mental models/categories and so enhance mental diversity.
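For instance (my illustration, with made-up patterns), each agent can be given the same training set shuffled under a different seed:

    import random

    patterns = ["p1", "p2", "p3", "p4"]

    for agent_id in range(3):
        order = patterns[:]                     # the same patterns for every agent
        random.Random(agent_id).shuffle(order)  # but a different presentation order
        print(f"agent {agent_id} sees: {order}")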

Humans have trouble splitting their attention, but AIs will typically have more STM than a human does, so this is less of a problem for them. AIs can potentially do more in parallel.

Some knowledge organization/partitioning/clustering/sorting can be built directly into an AI's memory and can follow standard library techniques and practice (see, for example, Dewey Decimal Classification and Relative Index, Forest Press, 1971, and Theory of Classification, K. Kumar, Vikas Pub., 2004). The knowledge stored on one given hard drive might be what would otherwise have been found in a given stack of a library of print books. In Asa H, for instance, sufficiently similar case vectors are clustered into a given casebase (one of the many casebases which Asa then uses).
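A hedged sketch of that kind of routing: a new case joins the first casebase whose exemplar is similar enough, otherwise it opens a new casebase. I am assuming cosine similarity and a fixed threshold here; Asa H's actual similarity measure and clustering details differ.

    import math

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        return dot / (norm_u * norm_v)

    THRESHOLD = 0.9
    casebases = []  # each casebase is a list of case vectors

    def file_case(case):
        for cb in casebases:
            if cosine(case, cb[0]) >= THRESHOLD:  # compare to the casebase's exemplar
                cb.append(case)
                return
        casebases.append([case])  # nothing is similar enough; open a new casebase

    for case in [(1.0, 0.1), (0.9, 0.2), (0.1, 1.0)]:
        file_case(case)
    print(len(casebases), "casebases")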

Advanced training for a society of AIs might be organized according to, and modeled after, the training of humans in their various common career tracks.

Issues such as these become important as the size of the knowledgebase grows and the knowledge becomes more diverse. It must also be possible to change the knowledge and its organization, for reasons like those described by Arbesman in The Half-Life of Facts: Why Everything We Know Has an Expiration Date, Penguin, 2012.

Open systems

Marty Solomon, an old friend of mine from college and grad school days, has argued that humans can do things that formal systems (Turing machines, digital computers) can't because humans are open systems, systems which interact with the world through a rich, wide-bandwidth array of sensors (Brit. J. Phil. Sci., 45, 1994, p. 549). He suggests that a computer augmented by "...sensory inputs from sophisticated input facilities..." might also qualify as such an open system. (The simple sensors possible with the Lego NXT robots clearly might not be enough.)

I hold rather similar views but would emphasize outputs as well as inputs. Asa H, for example, observes spatiotemporal patterns in the world, decomposes them, changes them, and assembles new patterns, some of which have never been seen in the real world. Asa performs some of these patterns during the course of its daily activities and evaluates their usefulness. It experiments. It injects new structure into the world. (For reasons I've explained before, however, I do not think this implies that all AIs have to be embodied in order to be intelligent and act intelligently.)

Friday, March 13, 2015

An Americanized version of the Parom tug

With Jupiter and Exoliner, Lockheed Martin is proposing an American version of the Russian Parom orbital tug and cargo container system. The launch vehicle is the Atlas V, which uses Russian engines. If only Lockheed Martin could cooperate with the Russians on the tug as well; both sides would benefit.

Wednesday, March 11, 2015

Design patterns for AI

Would the use of design patterns such as state, adapter, and composite aid in and speed the construction of high-quality AI software? Or are we still at the stage of discovering/inventing the patterns that we will need?
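As a reminder of what one of these looks like in an AI setting, here is a toy use of the state pattern: the agent's current mode decides its behavior, and modes hand control to one another. The modes and stimuli are invented for illustration.

    class Explore:
        def step(self, agent, stimulus):
            if stimulus == "threat":
                agent.state = Flee()  # hand control to another mode
            return "wandering"

    class Flee:
        def step(self, agent, stimulus):
            if stimulus == "safe":
                agent.state = Explore()
            return "running away"

    class Agent:
        def __init__(self):
            self.state = Explore()
        def step(self, stimulus):
            return self.state.step(self, stimulus)  # delegate to the current state

    agent = Agent()
    for s in ["food", "threat", "safe"]:
        print(s, "->", agent.step(s))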

Thursday, March 5, 2015

Cognitive architectures

I believe that building more intelligent software requires us to explore the space of cognitive architectures. To that end I have Asa H, ACT-R, SOAR, CHREST/EPAM, a subsumption architecture, and various ANNs (deep learning MLPs, ART, etc.) up and running.

STM and LTM in Asa H

Each layer of the Asa H hierarchy has short-term and long-term memories. The short-term memory stores the few most recently active input patterns, while the long-term memory stores the concepts/patterns which that layer has learned to identify and use. Globally, however, it is possible that the LTM in a lower layer of the hierarchy may actually change faster than the STM of some higher layer.
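A minimal data-structure sketch of that arrangement, with invented inputs (this is not Asa H's actual code). A fast-changing world grows the lowest layer's LTM quickly, while a higher layer, seeing slower abstractions, updates even its STM rarely.

    from collections import deque

    class Layer:
        def __init__(self, stm_size=3):
            self.stm = deque(maxlen=stm_size)  # short-term: the few most recent inputs
            self.ltm = []                      # long-term: learned patterns

        def observe(self, pattern):
            self.stm.append(pattern)
            if pattern not in self.ltm:        # crude stand-in for concept learning
                self.ltm.append(pattern)

    hierarchy = [Layer(), Layer()]             # layer 0 = lowest

    for t in range(10):
        hierarchy[0].observe(f"input-{t}")     # raw input arrives every tick
        if t % 5 == 0:
            hierarchy[1].observe(f"abstraction-{t // 5}")  # abstractions arrive rarely

    print(len(hierarchy[0].ltm), "patterns now in layer-0 LTM;",
          len(hierarchy[1].stm), "items in layer-1 STM")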

Wednesday, March 4, 2015

Asa H's concept of its self

As Asa H creates/grows a hierarchical model of its interaction with its environment, it forms a model of itself in that environment. Here is a fragment of a concept hierarchy grown by Asa H showing the model it has created of itself. (I am the one who named the various nodes, but Asa H can be taught the names too; see chapter 1 of my book Twelve Papers.) As Asa H continues to act in its world this self concept becomes activated periodically.

Monday, March 2, 2015

The MicroPsi cognitive architecture

I am studying Bach's MicroPsi cognitive architecture (Principles of Synthetic Intelligence, Oxford Univ. Press, 2009) and have downloaded a copy of the MicroPsi 2 software. MicroPsi has a heuristic value system that is modeled after that of humans. It is not based on a simple scalar utility; rather, it employs a set of drives and aversions that attempt to measure and respond to things like the following (a toy sketch contrasting this with scalar utility appears after the list):

health
damage
competence
uncertainty
affiliation and external legitimacy
fulfillment of social norms
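My own toy contrast, not MicroPsi's code: each drive has a current level and a target, an aversion is just a drive with a low target, and the agent attends to the most urgent need rather than collapsing everything into one number.

    drives = {  # name: (current level, target level)
        "competence":  (0.4, 1.0),
        "uncertainty": (0.7, 0.2),  # an aversion: the target is low
        "affiliation": (0.5, 0.8),
    }

    def urgency(level, target):
        """A drive's urgency grows with its distance from its target."""
        return abs(target - level)

    # A scalar-utility agent would collapse all of this into one number;
    # a drive-based agent instead attends to the most urgent need.
    most_urgent = max(drives, key=lambda d: urgency(*drives[d]))
    print("attend to:", most_urgent)  # -> competence (|1.0 - 0.4| = 0.6)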

I have criticized the human value system. I believe that a system like MicroPsi's may make an AI more human but less rational. I don't think the values MicroPsi uses are the most fundamental values (as defined by evolutionary biology, for example).
Like my Asa H, MicroPsi "encodes semantic relationships in a hierarchical spreading activation network. The representations are grounded in sensors and actuators and are acquired by autonomous exploration."