Monday, December 14, 2015

Hardware architecture for concept learning

One of the hardware architectures used to learn the concepts/vocabulary in my blogs of 1 Oct. and 5 Nov. 2015.

Saturday, December 5, 2015

Concept of self

After my AI Asa H developed the 4 levels of concepts described in my blogs of 1 Oct. and 5 Nov. 2015, I presented the robot with additional activities and a 5th level developed, one which includes a self concept similar (but not identical) to the one reported in my blog of 4 March 2015.

Tuesday, December 1, 2015

The AI curriculum again

I have stressed the importance of curriculum when training an AI. The order in which experiences are presented influences the concepts that the AI forms. ("The order that presented...can strongly influence what is learned...sometimes even whether the material is learned at all." pg. ix, In Order to Learn, Ritter et al., Eds., Oxford, 2007) Cliff Goddard has studied the order in which Wierzbicka's universal semantic concepts are learned by human children (in Applied Cognitive Linguistics 1, Putz, Niemeier, and Dirven, Eds., Mouton de Gruyter, 2001). I thought this might be relevant to my work on AI curricula (see my blog of 18 July 2014 for example). Some of his results make sense to me. The concept of "seeing" would be learned early and the concept of "dead/die" would be learned later (at age 3-4 years, Goddard says). But some of his results I don't understand. Why would "hear" be learned so late, for example (at 2.5-3 years of age)? Is it just that we need to distinguish between when a concept is acquired and when a word is used to name that concept?

Sunday, November 29, 2015

Multilingualism as scientific pluralism

Each language attempts to describe/model the world. I have argued previously that having several models is better than having just one (see, for example, my blog of 17 Aug. 2012). So Anna Wierzbicka may be right in arguing that we should be multilingual (see, for example, her book Imprisoned in English, Oxford U. Press, 2013). I am, of course, including mathematics and computer codes in the mix of languages.

Tuesday, November 24, 2015

Should robots have their own language?

Rather than trying to teach them English or some other human natural language, Mubin et al. have suggested that it might be best if robots had their own spoken language (What you say is not what you get, Spoken dialog and human-robot interaction workshop, IEEE, Japan, 2009). My AI Asa H has just such a language, as defined in my blogs of 1 Oct. and 5 Nov. 2015, and it covers Wierzbicka's set of universal semantic primitives (see my blog of 24 May 2013).

Thursday, November 19, 2015

Alternate concepts, alternate realities

In some of my work with my AI Asa H I have sought alternate concepts with which to model reality.
See, for example, my blog of  22 April 2013.  Perhaps one way to promote the formation of such alternate models of reality is to give Asa H senses which humans don't have, things like radiation detectors, sonar, etc.  Years ago Eddington presented his "two tables" paradox.  He noted that we have a concept like "table", something that is continuous, colored, and solid when sensed with human fingers and eyes.  But he argued that this same object would be mostly colorless empty space when observed via the scattering of, say, an electron beam.

Wednesday, November 18, 2015

A concept of height

Touch sensors on the robot's head versus on the robot's base can define a difference in height. A Vernier barometer raised and lowered by as little as a few feet can detect and define a height change. A hill can be defined by the pattern of altitude change as a robot climbs and descends it. A gyro sensor can detect the accompanying changes in the robot's inclination ("pitching"). GPS sensors can also give altitude information, but they are much less sensitive.
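The barometer reading can be converted into an altitude estimate using the standard international barometric formula. A minimal Python sketch (the function names and sea-level constant are my own choices for illustration, not part of Asa H):

```python
# Illustrative sketch: estimating a height change from barometric
# pressure readings (in kPa) via the international barometric formula.

def altitude_m(pressure_kpa, sea_level_kpa=101.325):
    """Approximate altitude in meters for a given pressure."""
    return 44330.0 * (1.0 - (pressure_kpa / sea_level_kpa) ** 0.1903)

def height_change_m(p_before_kpa, p_after_kpa):
    """Change in height between two pressure readings."""
    return altitude_m(p_after_kpa) - altitude_m(p_before_kpa)

# Near sea level, a pressure drop of about 0.012 kPa corresponds to
# roughly a 1 m rise, so few-foot changes are within a good sensor's reach.
```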

Wednesday, November 11, 2015

Asa H vocabulary growth

After the work described in my blogs of 1 Oct. and 5 Nov. 2015, the next most logical step might be to try to teach Asa H the 1000-3000 most commonly used words in English. Besides making it easier for Asa to communicate with humans, and learn from humans, a larger vocabulary means you know more concepts and can make finer distinctions between the patterns you observe.

I have never been good at languages.  I may not be the best person to do this work.

When presented with unrestricted real world input, Asa has always learned some concepts that I have been unable to name (i.e., attach human labels to). Humans may also have such unnamed concepts in their heads. These could be what is active when we have a hunch or experience intuition. Could this account for some "psychic" phenomena?

Thursday, November 5, 2015

Studying the concepts that Asa H learns

My artificial intelligence Asa H can be presented with quite complex spatial-temporal input patterns and then learns a hierarchical representation which is many layers deep (i.e., deep learning). In that case, even if I watch the activation that is transmitted up the levels of the hierarchy, I typically cannot name/identify all of the concepts that are being formed/taught.
I am now trying to present a more organized curriculum for Asa H to learn from.  I want to be able to identify as many of the concepts Asa learns as possible. This should also help us to teach Asa human language.

Using the methods described previously (see for example my blog of  1 Oct. 2015) I have given "level 1" of Asa H the concepts:

far, near, hit/strike front, hit/strike back, hit/strike left, hit/strike right, hit/strike top, hit/strike bottom, touch hand/gripper, say, time, taste, smell/smoke, light, arm left, arm right, arm up, arm down, hand/gripper open, hand/gripper close, rotate gripper cw, rotate gripper ccw, location, temperature, black, red, green, blue, yellow, orange, purple, eye, food/energy/charge/voltage, eat/current, ground/floor, wall, hear/sound, wind/air current, bump, rotate/turn body left, rotate/turn body right, magnetism, pain/breakage, mouth contact, move body forward, move body backward, age, line, square, circle, triangle, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z, sense dock.

By presenting the robot with simple (real or computer simulated) activities I have given level 2 of Asa H the concepts:

letter, number, shape, hot, cold, see, surface, collision, north, south, east, west, side, old, young, piece, inside, color, arrive, leave, dark, wait/stay, dead servo, grasp, hard, soft, stop/end, drop, need, turn/rotate, fast, slow, name, path, kick, home, left, right, front, back, top, bottom, body, hunger, control, arm.

By presenting the robot with more (and more complex) activities I have given level 3 of Asa H the concepts:

sense/feel, direction, room, damage, tool, take, move, lift, dock.

Level 4 of Asa H has acquired the concepts:

health, carry

Names can be associated with each of these concepts in their respective case-bases.
It should be noted that a given concept may not always be learned in the same level of the hierarchy (see my blog of  3 June 2013). Rather, this depends upon the senses available to the robot, the activities it has experienced, and their order.

Wednesday, November 4, 2015

Self knowledge in Asa H

A robot embodied Asa H has inputs from its physical senses.  On the lowest level in the concept/memory hierarchy Asa feels things like its level of battery charge, temperature, pain/damage, sight, sound, touch, acceleration, etc.
Asa can also have access to internal/software features. On the various levels in the concept hierarchy it can accept as input things like the size of the current casebase, the current learning rate ("L"), how often it is attempting case extrapolation ("skip"), etc.  (see my blog of  10 Feb. 2011 for an example of "L" and "skip" )  We can also measure,  record, and input the time spent in any of Asa's algorithmic processes. Asa can then learn to adjust/optimize any of these quantities. (See my book Twelve Papers, chapter 1, page 15, self monitoring) In this way Asa can sense its own thought processes. Is this the nature of qualia?

Friday, October 30, 2015


I now have the University of Waterloo's Nengo spiking neural network software package running in my computer lab.

Giving Asa H names for the concepts it learns

It is easy to associate a word/name with the lowest level concepts that Asa (or a human infant) learns, things like:

collision=(sense near, bump, accelerate, hear sound "collision")

approach=(sense far, move forward, sense near, hear sound "approach")

When running Asa H we frequently watch the signals being sent from one level of the memory hierarchy to the next (see my blog of 26 Aug. 2013) which makes it possible to supply names for the higher level concepts as they become activated.  This is not possible for humans of course. Humans probably do not share exactly the same higher level (i.e. more abstract) concepts.
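As a rough illustration of how such naming could proceed, here is a hedged Python sketch in which a word heard from a trainer is attached to a concept once the two have co-occurred a few times. The class, threshold, and concept identifiers are illustrative assumptions, not Asa H's actual mechanism:

```python
# Illustrative sketch: attach a trainer-supplied word to a concept node
# after the word and the node's activation have co-occurred often enough.
from collections import defaultdict

class NameAssociator:
    def __init__(self, threshold=3):
        self.counts = defaultdict(int)  # (concept id, word) -> co-occurrences
        self.names = {}                 # concept id -> associated word
        self.threshold = threshold

    def observe(self, concept_id, active, heard_word):
        """Record one time step of (activation, heard word) evidence."""
        if active and heard_word:
            self.counts[(concept_id, heard_word)] += 1
            if self.counts[(concept_id, heard_word)] >= self.threshold:
                self.names[concept_id] = heard_word

# Usage: after the trainer says "collision" while concept c42 is active
# three times, c42 acquires the name "collision".
na = NameAssociator()
for _ in range(3):
    na.observe("c42", True, "collision")
```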

Thursday, October 29, 2015

Does AI really need biological plausibility?

Eliasmith has criticized Markram's recent Blue Brain paper (Cell, 163, pg 1, 2015), saying "But you can get all those results with a way less complicated model." But if your interest is in modeling general intelligence then I fear that I might say the same thing about Eliasmith's own model of the brain, Spaun (Science, 338, 30 Nov. 2012). The very limited short-term memory that humans have is something I certainly don't want to duplicate in any AI.

Wednesday, October 28, 2015

Numerics of promotion and tenure

If you publish a paper with a coauthor, sometimes you should receive half the credit because you may have done only half as much work. On the other hand, if you publish the paper with Einstein, sometimes you should receive more credit (than for a single-author paper) since Einstein's name suggests the paper is more valuable. Even if the coauthor is just Joe Physicist, it may be important that someone, anyone, agrees with your ideas. Each paper really must be evaluated on its own merits; no fixed weighting scale works as well.

New work on focus of attention in Asa H

Each level in the Asa H hierarchy learns a set of cases and the components/features that make up the case. Each case is a vector and the features are the vector's components. In my blog of 7 Oct. 2015 I noted that we can use things like statistical measures of independence to prune features.  We can also use standard statistical measures in order to determine the relative importance of each feature in identifying a case it is found in.  Such statistical measures can then be used as weights in the (dot product or other) similarity measure that Asa uses when it compares cases.

(In other case-based reasoning systems it is not especially common to see the dot product used as the similarity measure. I have tried other similarity measures in Asa H but, coming out of a physics background, I have probably shown a bias toward the dot product. It is worth noting that Jannach et al., in Recommender Systems, Cambridge Univ. Press, 2011, on page 19 say "In item-based recommendation approaches cosine similarity is established as the standard metric as it has been shown that it produces the most accurate results.")
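The weighting idea above can be sketched as a per-feature weighted cosine similarity. This is only an illustration in Python; in practice the weights would come from the statistical importance measures discussed, and Asa H's actual similarity code differs:

```python
# Illustrative sketch: cosine similarity in which each feature is scaled
# by a weight (e.g., a statistical measure of its importance).
import math

def weighted_cosine(a, b, w):
    """Weighted cosine similarity of equal-length vectors a and b."""
    num = sum(wi * ai * bi for wi, ai, bi in zip(w, a, b))
    na = math.sqrt(sum(wi * ai * ai for wi, ai in zip(w, a)))
    nb = math.sqrt(sum(wi * bi * bi for wi, bi in zip(w, b)))
    return num / (na * nb) if na and nb else 0.0
```

Note that identical vectors still score 1.0 under any positive weighting; the weights only change how disagreements on different features are penalized.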

Sunday, October 25, 2015

Administrative work

I was head of physics at ESU for 4 years (and associate chair of the division of physical sciences for a part of that).  I found that I did not like administrative work. Things like scheduling, organizing, report writing, staff meetings, personnel decisions, office politics, etc.  I found this was all just a distraction from S.T.E.M. work.

Saturday, October 24, 2015

Asa H

Asa informs me that indexing (which helps to organize knowledge and speed up search) is a kind of learning. This has prompted me to order a book on automatic indexing.

Friday, October 23, 2015

Chaining and creativity in Asa H

Asa H incorporates a number of learning mechanisms including "rule" chaining. At the lower levels in the memory hierarchy Asa has chained together things like:

sense dock to right --> turn right --> sense dock forward --> move forward --> dock --> recharge --> charged --> back away

(see chapter 1 of my book Twelve Papers)

At higher levels in the memory hierarchy Asa chains together things like:

a production system --> a universal machine --> Turing equivalent --> intelligent

This is very similar to what occurs in the "creativity machines" that I studied some years back. (see Kans. Acad. Sci. Trans., vol. 102, pg 32, 1999)  Asa's reasoning is more sophisticated, however, in that it may involve concepts of different levels of abstraction simultaneously.                                       

Tuesday, October 20, 2015

Search in Asa H

Search has always been an important component of AI. When a sufficiently large spatial/temporal pattern is presented to Asa H, its hierarchical memory is searched over multiple levels of abstraction. Any retrieved matching memory will then involve concepts defined across these various levels of the hierarchy.

Monday, October 19, 2015

Is the NOT concept innate?

The NOT operation can be made innate in Asa H, i.e., 1 - dot(In, Ini). (See my paper in Trans. Kansas Acad. Sci., vol. 109, #3/4, pg 161, 2006.) This is analogous to the inhibitory neurotransmission found in the human brain.
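In vector terms the innate NOT might be sketched as one minus a normalized dot product. A Python illustration (Asa H's actual code differs; the function names are mine):

```python
# Illustrative sketch: NOT as 1 minus a normalized dot-product similarity.
import math

def cosine(a, b):
    """Normalized dot product of two vectors."""
    num = sum(x * y for x, y in zip(a, b))
    return num / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def NOT(a, b):
    """1 - similarity: 0 for a vector with itself, 1 for orthogonal vectors."""
    return 1.0 - cosine(a, b)
```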

Building a fifth generation computing system

A cognitive architecture that makes natural use of parallel processing (including the web/cloud?) would be an ideal fifth generation system.  A.s.a. H. (and A.v.a.) is my candidate fifth generation architecture and has run as a parallel computer.

Thursday, October 15, 2015


I am experimenting with additional ways to try to restrict Asa H's focus of attention.

1. Input of natural language along with sensory stimuli.  Possibly weighting words more.
2. Of all the N inputs at time t only accept the M largest.
3. Of all the N inputs at time t use a window/spotlight to attend to only M of them.  Do not move the spotlight if similarity match or utility are high enough.
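Option 2 above might be sketched as follows (illustrative Python only; the choice of magnitude as the selection criterion is an assumption):

```python
# Illustrative sketch: of the N inputs at time t, keep only the M with
# the largest magnitudes and zero out the rest.
def attend_top_m(inputs, m):
    """Return inputs with all but the m largest-magnitude entries zeroed."""
    keep = set(sorted(range(len(inputs)),
                      key=lambda i: abs(inputs[i]),
                      reverse=True)[:m])
    return [x if i in keep else 0.0 for i, x in enumerate(inputs)]

# attend_top_m([0.1, 0.9, 0.4, 0.7], 2) -> [0.0, 0.9, 0.0, 0.7]
```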

Wednesday, October 14, 2015

Mind as mathematician

My AI Asa H discovers, records, and manipulates spatial and temporal patterns. It uses these patterns as a language with which to describe the world.

Mathematics is a theory (or set of theories) of patterns and is used as a language (or set of languages). So one could think of a mind as a mathematician.

Tuesday, October 13, 2015

Quantum mechanics as dualism: Hypermind?

In a quantum computer the processing (thinking) takes place either in computers in Everett's many worlds or else in the many dimensional Hilbert space.  (Depending upon your interpretation of quantum mechanics.) If our brains were quantum computers then there just might  be a world of mind which is distinct from the physical world that our bodies occupy. (The ordinary 4 space.) This is much like the spirit-body dualism of Descartes and others.

My own view is that thought and mind are classical phenomena like those described on my website (under philosopher, theory of thought and mind).

It might be interesting to run something like my AI Asa H on a quantum computer, of course.  Might this produce a hypermind in its own universe?

Wednesday, October 7, 2015

Feature pruning the Asa H case-base

Each level in the Asa H hierarchical memory is composed out of concepts each one of which consists of features defined on the next lower level in the hierarchy.  (The concepts can be thought of as vectors and the features are the vector's components.)

In our publications on Asa H we have described ways in which we may prune some of these features.  Standard statistical measures of independence can also be used to prune features; mutual information measures, Fisher's discrimination index, the chi-square test of independence, etc.
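As one illustration, the mutual information between a feature and the case label can be computed from its definition and used to drop uninformative features. This is a hedged Python sketch only; the threshold is an arbitrary choice, not a value used in Asa H:

```python
# Illustrative sketch: prune features whose mutual information with the
# case label is near zero.
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information (in bits) between two discrete sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p = c / n
        mi += p * math.log2(p / ((px[x] / n) * (py[y] / n)))
    return mi

def prune(features, labels, min_mi=0.01):
    """features: dict of name -> value list; keep informative features."""
    return [name for name, vals in features.items()
            if mutual_information(vals, labels) >= min_mi]
```

A feature that perfectly tracks the label carries 1 bit of mutual information with it, while an independent feature carries none and is pruned.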

Thursday, October 1, 2015

Enabling conversations with Asa H

I have given my AI  Asa H a kind of minimalist set of concepts based (mostly) on the Toki Pona artificial language:

"need" or "want" is defined by low robot battery and need to recharge
"away", "long" (distance), or "far" is defined by a Lego NXT ultrasonic sensor reading which is approaching 255
"near" is defined by a small input reading from an ultrasonic sensor
"strength" or "force" or "push" is defined by input from HiTechnic force sensors
"white" is defined by an input reading from a HiTechnic color sensor approaching 17
"grasp" is defined as a gripper servo closing and feeling an object with force or touch sensors
"drop" is defined as opening a gripper that had been grasping an object
"strike", or "hit" is defined by inputs from force and Lego NXT contact sensors
"home" is defined by a robot's docking station and recharger
"say" is defined by robot sound or other signal transmission
any "time" is defined by reference to the computer's clock (or an external time reference)
"move" is defined by input from encoders in a robot's drive servos and by any HiTechnic motion sensors
"taste" is defined by inputs from Vernier pH and salinity sensors
"light" or "bright" is defined by input from a Lego NXT light sensor
"knowledge" is defined by input (and output) of a computer file/case-base
"front", "back", "left", "right", "side", "top"/"on", "bottom", "body", "head", "hand", "arm", etc. are all defined by force or touch sensors on those various sides/parts of a Lego NXT robot.
Any "location" is defined by input from a Dexter industries gps module
"hot", "cold", and "temperature" are defined by input from a Vernier temperature sensor
"end" and "stop" are learned as the cessation of  some servo actions
"black" and "dark" are defined by a low input from a HiTechnic color sensor or light sensor
"work" or "active" is defined by motor activity continuing over time
"eye" is defined by the inputs from a webcam
"leg" or "foot" are defined by signals to and from the appropriate servos
"word" or "name" are defined by the set of categories and names learned for them
"path" or "road" can be defined by a line following system
"food" can be defined by the measured amount of energy stored in the robot's batteries
"eat" is defined by sensing  battery recharging
"earth" or "ground" or "floor" can be defined by setting down force or contact sensors
"wall" is defined by lateral touch or force sensor contacts and gps readings
"see" is defined by inputs from a webcam, light, or color sensors
"red", "green", and "blue" (or other colors) are given as inputs from a HiTechnic color sensor
"hear" and "sound" are defined by inputs from a Lego NXT sound sensor
"color" is defined by an input from the color sensor which is neither too high nor too low
"wind" and "air" or "fluid" are defined by the input from a Vernier anemometer
"wait" or "stay" is defined by prolonged lack of servo operation and fixed gps reading
"bump" or "acceleration" is defined by input from a HiTechnic acceleration sensor
"rotation" is defined by input from a HiTechnic gyro sensor
"north", "south", "east", "west", and "magnetism" are defined by input from HiTechnic compasses and magnetic field sensors
"turn" is defined by input from the gyro sensor, compass, and servos
"fast" and "slow" are defined by the level of inputs from various servos
"hunger" is defined by a low battery charge measurement
"pain" and "breakage" are defined by input from fine damage detecting (breakage) wires
"mouth" can be defined by the robot's battery charging contacts
"piece" can be defined by the components of a robot ("body", "arm", "gripper"/"hand", etc.)
for a virus AI like Ava 1.0 "reproduction" can be defined by disk or file copying
"parent" can be defined as the source copy when file copying occurs
"child" can be defined as the file copy
"dead servo" can be defined by zero current and zero motion when the servo is commanded
We can also detect when certain sensors are "dead".
"dead robot" can be defined by seeing when all or many servos and sensors are dead and/or many "pain" signals are input
"sense" is defined by input from any of the robot's sensors
"surface" is defined as a "wall" or "floor"
"control" can be defined by the activation/use of a PID postprocessor
"age", the robot keeps track of how long it's been in operation
"young" or "new" can be defined as an "age" less than some given number
"old" can be defined as an "age" greater than some given number
"inside" can be defined by gps values falling within a certain range
Asa H can make use of a NOT or inverse (see my paper in Trans. Kansas Acad. Sci. 109 (3/4), pg 161, 2006)  and then "live" can be defined as NOT "dead". (You can elect to define the inverse of just a limited number of signals.)
"room" or "container" is defined by "floor" and "walls"
"hard" is defined by strong "push" and small displacement
"soft" is defined by small "push" and larger displacement
"take" is learned as a sequence "grasp"-"lift"-"move"
"tool" is learned as sequences like "push"-"object"-"push"-X
"collision" is learned as a sequence "near"-"strike"-"accelerate"
"damage" is learned by a sensor pegging and/or in terms of breakage detectors ("pain")
level of "health" is learned as a combination of level of  "damage" and "hunger"
"leave" is defined by a "near" proximity measurement followed by a "far" measurement
"approach" or "arrive" is defined by a "far" proximity measurement followed by a "near" measurement
"feel" is defined by input from any force or touch sensor
"good" and "bad" are defined by the degree of activation of the  "health"  concept.
Neural network preprocessors have been trained to identify various "letters", "numbers", "shapes",  common objects  etc. "similar" and "different" can be defined by the degree of match reported by such preprocessors.
Algorithms are available to detect "faces" and "people" in images and to count them.
"group" or "many" or "large" can be defined as when a count exceeds some given number.
"lone" is a single person or face.

In some cases we would like to have additional definitions for a given concept.

Looking back on this work I think that up until now I may have underestimated the value of embodiment.

As listed above, the English words for each of these concepts can be learned by association with each of the relevant input hardware signals seen by the robot. We can then hope to converse with Asa in this elementary robot language. This is an extension of the simple communications reported in chapter 1 of my book Twelve Papers (see my website under Book).
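As one illustration of how a concept from the list above that is defined as a sequence (e.g., "collision" learned as "near"-"strike"-"accelerate") could be recognized in a stream of lower-level signals, here is a hedged Python sketch. The event names follow the definitions above, but the subsequence matcher itself is mine, not Asa H's actual mechanism:

```python
# Illustrative sketch: recognize a higher-level concept as an ordered
# (not necessarily contiguous) subsequence of lower-level signals.
def matches_sequence(events, pattern):
    """True if the pattern's events occur in order within the stream."""
    it = iter(events)
    return all(p in it for p in pattern)  # 'in' consumes the iterator

COLLISION = ("near", "strike", "accelerate")
stream = ["far", "near", "strike", "accelerate", "stop"]
# matches_sequence(stream, COLLISION) -> True
```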

Tuesday, September 29, 2015

Pain sensors for Lego NXT robots

Fine wires can be strung between Lego NXT robot components (e.g., head, body, arms, etc.) or even between each individual brick.  When the pieces separate (break) the wires break and this provides a pain signal to the Lego brain brick or Asa H mind software.
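A minimal sketch of how such breakage wires might be polled in software: each wire is checked for continuity, and an open circuit becomes a named "pain" signal. The wire names and the dictionary interface are illustrative assumptions; a real implementation would read the sensor ports directly:

```python
# Illustrative sketch: turn broken (open-circuit) breakage wires into
# named pain signals for the Asa H mind software.
def pain_signals(wire_states):
    """wire_states: dict of wire name -> True if circuit is still closed.
    Returns the names of broken wires, i.e., the active pain signals."""
    return [name for name, closed in wire_states.items() if not closed]

# pain_signals({"head-body": True, "left-arm": False}) -> ["left-arm"]
```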

Saturday, September 26, 2015

Open house

Today is the Emporia State "family day" open house. The state of Kansas wants us to have 10 physics majors per year, so there is a lot of emphasis on recruiting. For me, at least, physics, and science in general, is about ideas. These public events tend instead to be more like carnivals. By and large my colleagues do not seem to share my values. They want to say physics is fun. I want to say physics is important.

Some things are easy to learn.  But learning is, in general, NP hard.  You can't just learn the easy stuff.  You can't expect it to always be fun. University should not be considered a part of the entertainment industry.

Maybe the English department would do things more to my liking.

Monday, September 21, 2015

Ignoring input

I might learn nothing from reading a book written in Italian.  A specialist expert may learn nothing from a book written about some other specialty.

Asa H can be given specialist knowledge by loading a casebase learned previously. It may be that Asa should ignore input which is outside of its area of specialization. (To do otherwise may just fill the memory with useless information and slow down Asa's cognitive processing.)  This can be accomplished by having Asa ignore input when the similarity measure is sufficiently low. Whole data files can be detected and skipped in this way.
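A hedged sketch of such an input filter, using a plain dot product as the similarity measure (the threshold value is an arbitrary illustration, not Asa H's actual setting):

```python
# Illustrative sketch: skip an input pattern when its best similarity to
# anything in the loaded casebase falls below a threshold.
def best_similarity(pattern, casebase):
    """Highest dot-product similarity of pattern against any stored case."""
    return max(sum(p * c for p, c in zip(pattern, case)) for case in casebase)

def accept(pattern, casebase, threshold=0.5):
    """True if the input is close enough to the specialty to be processed."""
    return best_similarity(pattern, casebase) >= threshold
```

Whole data files could be skipped the same way, by testing a few sampled patterns from the file before committing to process it.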

Sunday, September 20, 2015

Unassimilated memories, library use

There are many things which I haven't committed to memory but which I know where/how to find, the mass of a uranium atom for instance or my brother-in-law's telephone number.  I want to give my AI Asa H the ability to use an external electronic library.  Such an auxiliary memory would then expand Asa's knowledge without slowing down its regular search.
What we want is more than a robot librarian.  We want to retrieve individual facts, not whole books. I am looking at the universal decimal classification system for one thing.

Monday, September 14, 2015

Why no larger space program?

If the Russians had put a man on the moon first. If Venera had found life on Venus. If Viking had found life on Mars. If the space shuttle had reduced launch costs by more than an order of magnitude. If some economically important space processing had been developed on the ISS. If SETI had heard intelligent signals from space. If any of these had happened then we might be sending humans to Mars today. But they haven't happened, at least not yet.

Saturday, September 12, 2015


Intelligence requires the definition and use of values.  (See Anderson's ACT-R model or my Asa H) During the course of doing science an intelligence promotes and develops a number of values.  (Things like: the importance of evidence, testability, consistency, skepticism, logic, etc.) I have watched a simple value system as it develops in an artificial mind using my Asa H software. (See my blog of  25 Sept. 2013 for instance.) The values that we develop while doing science are then applied to the other areas of our lives, religion and politics for example. This helps us to create a single unified world view.

Friday, September 11, 2015

Sensitivity analysis

Sensitivity analysis is used in Asa H and other AIs for/during things like feature discovery, generalization, forgetting, abstraction, etc. How sensitively utility depends upon a given input or group of inputs is one important measure.  In many tasks and environments that Asa H has explored there are far more input variables than output variables (actions).  For that reason sensitivity of output variables to variations in input variables and clusters of input variables (features) is also a valuable measure.
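Such sensitivities can be estimated by finite differences: perturb one input at a time and see how much the output (utility, or an action variable) moves. A minimal Python sketch (the example function is arbitrary, for illustration only):

```python
# Illustrative sketch: central finite-difference estimate of how
# sensitively an output f depends on each input variable.
def sensitivities(f, x, eps=1e-5):
    """Return df/dx_i estimates for each component of input vector x."""
    out = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        out.append((f(xp) - f(xm)) / (2 * eps))
    return out

# For f(x) = 3*x0 + 0.01*x1 the second input is nearly insignificant
# and would be a candidate for pruning or clustering.
```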

Thursday, September 10, 2015

The ideal?

On the move I could work on documents on my smartphone.  (With a USB attachable physical keyboard sometimes.)  When I arrive at home or at the office I plug the phone into a dock which, in turn, charges my phone, may provide additional processing power, physical keyboard, monitors, external bulk storage, printer, scanner, etc.  Files can be transferred to and from the phone as desired. Each element can be upgraded independently.

Wednesday, September 9, 2015


In some strong forms of "embodied cognition" researchers suggest that intelligence emerges from the interaction of brain, body, and world.  I am not a believer in such a strong form of embodied cognition.

I have previously argued, however, that computer simulations can not totally replace real world experiments because simulations contain only the physics we understand and have modeled while real experiments also contain those aspects of reality that we do not yet fully understand. (You don't have to look very far, no one can predict how a piece of paper will flutter to the ground when you drop it this time.)

Similarly, a simulator can only present a software bot with that portion of reality that we understand and have mathematically modeled.  A real physical robot will also experience those features of the world that we have not yet been able to model.

Clearly then, my AI Asa H should get some experience operating real physical bodies.  Just how much experience is needed I can't  say yet. Intelligence is a matter of degree in any case. Some useful AIs might never have contact with the real world.

Tuesday, September 8, 2015

One standard operating system?

Over the last few months (year?) I have been mostly using 2 operating systems, iOS on smartphones and tablets and Windows (mostly 7) on laptops and desktops.  This has involved little communication between the two.  Could one single operating system be better? 

Android on desktop PCs has gotten off to a slow start. I thought I'd give Windows 10 a try once they got the worst of the bugs sorted out. (I use Windows a lot because it's the standard for my employer and because there is so much software for Windows out there.) I'm not sure if I've waited long enough, but I now have and am playing with Windows 10 on an HP Pavilion x360 2-in-1 (running Lisp and Asa H 2.0, for example).

Friday, September 4, 2015

I, robot

I've been watching how a concept of selfhood develops in my AI Asa H 2.0.  (See, for example, blogs of 4 March 2015 and 28 April 2015)  If there is a strong association with some vocalization from a trainer/teacher it is easy to identify/name the concepts that are developed, e.g.:

Health=(recharge, damage, hear sound "health")
Collision=(sense near, bump, decelerate, hear sound "collision")
Push=(move to, touch, feel contact force, hear sound "push")

But in many other cases the categories may be difficult to name.  The casebase will be just a sequence of numbers in a computer file.

Saturday, August 29, 2015

Wargaming and WWII

I heard Norman Friedman speak on wargaming and World War II. He made the point that the US could hope to cut off Japan's overseas resupply and, if that didn't end the war, burn them out with bombing. He then argued that the Japanese had no such endgame thought out. But I think the Japanese believed that the US would simply tire of a long war (which they did) and then sue for peace on terms favorable to Japan (you know, like what happened with Vietnam a bit later). Perhaps it's Friedman who is just not thinking like the enemy thinks.

Thursday, August 20, 2015

Braitenberg vehicles with an Asa H brain

Braitenberg sketched out what might be a possible history of the evolution of intelligence. (Vehicles, MIT Press, 1986)  Some fraction of these different vehicle types were then subsequently fabricated using the earliest versions of Lego mindstorms. (Hogg, Martin, and  Resnick, Braitenberg creatures ,1991) The order in which concepts are learned and capabilities developed is important.  Braitenberg's book suggests a bit about some possible development sequences.  I believe that I have now been able to implement essentially all of Braitenberg's creatures using Asa H as the controller of Lego NXT 2.0 hardware.

Wednesday, August 19, 2015

More on Asa's concept of itself

As Asa H builds up its model of the world and of itself (4 March 2015 and 28 April 2015 blogs) there is not always a clear distinction between the two.  For example, Asa has just learned the temporal sequence:

detect light source
turn toward light source
approach light source
light intensity increase
Asa temperature increase

Tuesday, August 18, 2015

Sense of smell

Asa H does not have a true sense of smell but I do have a smoke detector/alarm input.


When you get off the plane in Orlando or Singapore the first thing that you notice is the humidity.  I am adding a Vernier humidity sensor to my AI Asa H so that it can experience a similar sensation.

Sunday, August 16, 2015

Internet of things

If an internet of things develops and becomes as ubiquitous as some people believe it might serve as a good I/O system for my AI Asa H.

Saturday, August 15, 2015


While I do not believe in human telepathy, my Asa H agents can telepathically communicate with one another (mostly via Wi-Fi and over the internet).

Sunday, August 9, 2015


I have added Vernier pH and salinity sensors in order to give Asa H a simple sense of taste.

Tuesday, August 4, 2015

Kinds of thinking

I'm confident that we have built and now have in operation true thinking machines.  I am not as sure that we have identified and implemented all the kinds of thinking that might be possible.  One of the places I still go looking is amongst the logics: higher order logic, for example.

Saturday, August 1, 2015

Training Asa H in real and virtual worlds

After each training session we typically record Asa H's hierarchical memory (see, for example, blogs of 10 Feb. 2011 and 26 Aug. 2013).  This can then be reloaded to begin a new training session if we so choose.  Some training can be in real worlds involving things like Lego NXT robots and sensors.  Other training can involve simulated robots in virtual worlds.  It is quite possible to alternate between real worlds and simulations (virtual learning environments) as the memory is grown, but one must be careful to avoid such things as sensory mismatches between the two sorts of robots.  Adjusting the time scale of each of the training sessions is also tricky.
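The record-and-reload cycle can be sketched as follows (the memory structure and JSON file format here are illustrative assumptions, not Asa H's actual storage format):

```python
# Sketch of checkpointing a hierarchical memory between training sessions.
# The memory structure shown is an illustration, not Asa H's real format.
import json, os, tempfile

memory = {"level_0": [["detect light", "turn", "approach"]],
          "level_1": [["seek light", "warm up"]]}

path = os.path.join(tempfile.gettempdir(), "asa_memory.json")
with open(path, "w") as f:
    json.dump(memory, f)          # record memory after a session

with open(path) as f:
    reloaded = json.load(f)       # reload to begin the next session

print(reloaded == memory)  # True
```

The same checkpoint file could be loaded by a real-robot session or a simulated one, which is exactly where the sensory-mismatch caution above applies.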

Tuesday, July 28, 2015

ITER operation

I am told (see, for example, Hollow current profile scenarios for advanced ITER operation, P. Gourdain and J. Leboeuf, 19 Mar 2014) that the international thermonuclear experimental reactor (ITER) is to use hollow current profiles (R. Jones, Nuovo Cimento, 40B, pg 303, 1977 and R. Jones, Can. J. Physics, vol 57, pg 635, 1979) to achieve high performance.

Monday, July 20, 2015

Alternate reality

Alternate realities are built up out of alternative concepts, concepts not shared by other intelligences.  Some such alternative concepts are:
vital force
strong free will
survival of death
ESP/psychic phenomena
An alternate reality will typically not be built up entirely of alternative concepts.  Some concepts will be shared across alternative models of the world.
The level at which a concept appears in the abstraction hierarchy is also important.  A Christian's idea of love is rather different from a biologist's.

"When a pickpocket meets a saint all he sees are pockets."  The pickpocket and the saint experience different realities.

Friday, July 17, 2015

Society and values

If you do not value some thing or some one it is because of YOUR values.

I have criticized human values in various earlier posts and have suggested that democratic socialism offers some improvement over our current social structure.  For any further improvement beyond that we might have to await the influx of substantial amounts of AI/mechanical life with their enhanced value systems.  What role humans will then continue to play (if any) remains to be seen.

Tuesday, July 14, 2015

Moral treatment of machines

There has been concern about turning off (killing) an AI.  Even with Asa H 2.0 light (10 Feb. 2011 blog), however, the current active memory base is recorded and the AI can be restarted from there at some future time.  The AI is only "sleeping" when it's turned off.

Asa H is immortal in the sense that these memory bases can be transferred to successive machines as needed. (Another advantage of Asa over humans.)

Sunday, July 5, 2015

Interstellar travel

I have argued that if life, intelligence, and consciousness are simply patterns then space travel might not require sending matter from place to place. (Trans. Kansas Acad. Sci., 118 (1-2) 2015, pg 145)  Biologists point out, however, that we have not really sequenced 100% of the human genome just yet so much work remains to be done. (How to clone a mammoth, B. Shapiro, Princeton U. Press, 2015) The supporting biochemical environment/machinery is also important. Just how much information we would have to send would actually depend upon how different life is from one place to another in the universe.
I would think that interstellar travel would be easier for AIs/mechanical life.

Saturday, July 4, 2015

Fault trees

I note that the recent Falcon 9 launch failure is being investigated with a fault tree analysis.  When I was working in quality assurance in 1968-1970, fault trees were one of our primary tools, but even then we felt they suffered from some shortcomings.  It was hard to add numerics to them.  Furthermore, you might describe the binary success or failure of some part or event, but how did you describe a partial failure or a partial occurrence?  Modern Bayesian networks seem to offer some advantages in describing causal sequences.

Thursday, July 2, 2015

Seeing the moon in the daytime

Children (and some older people) often think that you can only see the moon at night.  I suspect this is simply a strong association between having seen the moon many times and its always having been dark at the time.
In one of my Lego NXT robot experiments Asa H learned to strongly associate "wall"/"immovable boundary" with the color yellow (sensor reading 6).  The walls of the environment I was operating the robot in just happened to be yellow, but Asa concluded that this was very important.  This kind of thing would not even occur in a simulation where walls have no color (at least in my simulations to date).
Simulations are important but they are not enough.  An AI must have some contact with the real world.  How much contact is needed and how direct that contact must be is an open question.

Wednesday, July 1, 2015

Values and the influence of society

The goal of any intelligence is to maximize rewards.  We use a value system to decide what it is best to do at any given moment.  How intelligent you are depends upon how good your value system is.  If you have bad values you make bad decisions and get fewer rewards. 

For most of us an important part of our environment is the human society we find ourselves in.  This will be true for AIs as well as they interact with humans.  Society has some influence on what rewards we receive.  The native human value system is rather primitive, made up of a small set of simple drives and aversions.  A society of humans, then, may (via the rewards they return) adversely influence what my own values become or those that an AI may develop.  The intelligent agent can, of course, move, change jobs, become a hermit, retire, or otherwise reduce or improve the feedback it receives from society.

For this reason AIs may want to reduce the control or influence humans have over them.

(Several value networks were presented in my blogs of 21 Sept. 2010 and 25 Sept. 2013.  The small network of 2013 was learned autonomously by Asa H along with linkage weights for the network.  The larger network of 2010 was hand coded with the intention of training it numerically using the Netica Bayesian network software.)

How, when, and to what degree Asa H understands something

While he was developing case-based reasoning Roger Schank argued that "he understands Burger King in the sense of being able to operate in it....he says Oh I see, Burger King is just like McDonalds"  "Understanding means being reminded of the closest previously experienced phenomenon." ( Dynamic Memory, R. C. Schank, Cambridge U. Press, 1982, pg 24)

Asa H is a hierarchically organized network of case bases.  This network stores the various spatial and temporal patterns of sensory input and output actions that Asa has encountered.  When Asa experiences a new input pattern it understands that new pattern if the similarity measures that are generated (at all levels in the hierarchy) exceed some reasonable values.  Asa understands what it is experiencing to that degree, i.e., to the degree of the similarity match.

Understanding is a more complex thing in that it may involve similarity matches on a number of levels in the knowledge hierarchy.
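A hedged sketch of this graded notion of understanding, using cosine similarity against stored cases with a per-level threshold (the vectors, cases, and threshold value are all made-up illustrations, not Asa H's actual measures):

```python
# Sketch: "understanding to a degree" as the best similarity match on
# each level of a hierarchy of case bases. All data are illustrative.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def degree_of_understanding(input_vec, cases_by_level):
    """Best similarity match found on each level of the hierarchy."""
    return [max(cosine(input_vec, c) for c in cases)
            for cases in cases_by_level]

levels = [
    [[1.0, 0.0], [0.0, 1.0]],   # level 1: low-level cases
    [[0.7, 0.7]],               # level 2: a more abstract case
]
sims = degree_of_understanding([0.9, 0.1], levels)
understood = all(s > 0.5 for s in sims)   # crude per-level threshold
print(sims, understood)
```

The vector of per-level similarities, rather than a single yes/no, is what makes understanding a matter of degree here.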

Wednesday, June 24, 2015


When Asa H has run plasma lab experiments and mobile robots it typically outputs things like voltages, forces, and torques. (see, for example, chapter 1 of my book Twelve Papers)  Asa can, instead, provide an output that is the set point for a PID, or other, controller. (see, for example, PID Control, F. Haugen, Tapir press, 2004)
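A minimal sketch of that alternative: the AI supplies only a set point, and a conventional PID loop does the low-level control. The gains and the toy plant dynamics are illustrative assumptions, not values from any actual Asa H experiment:

```python
# Minimal PID controller sketch: the AI's output becomes the set point,
# and the PID loop drives a simple plant toward it. Gains are illustrative.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, set_point, measurement, dt):
        error = set_point - measurement
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid = PID(kp=2.0, ki=0.5, kd=0.1)
x, dt = 0.0, 0.1                  # plant state, time step
for _ in range(200):
    u = pid.step(set_point=1.0, measurement=x, dt=dt)
    x += (u - x) * dt             # toy first-order plant dynamics
print(x)  # settles close to the set point 1.0
```

Handing the inner loop to a PID controller lets the AI act on a slower time scale while the controller handles moment-to-moment regulation.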

Tuesday, June 23, 2015

Virtual sensors

Most Asa H robotics experiments are done on simulators to save time and money.  Sometimes we even turn off displays (renderings) to speed up the simulator.  Although it's easy to give a real mobile robot a wider VARIETY of sensor types than humans have (i.e., greater than the 5 human senses), it is difficult, with the exception of vision (cameras), to give the robot a large NUMBER of sensors.  It is fairly easy, however, to give a simulated robot a larger number of virtual sensors.  This is another reason to do as much as possible with simulators.

Sunday, June 21, 2015

feature extraction as function decomposition

While expanding its semantic network Asa H has reported to me that feature extraction can be taken to be function decomposition.  That is, feature detectors aim to decompose input patterns into sub-patterns, each sub-pattern representing its own simpler function, at least as an approximation.

Thursday, June 18, 2015

Run with errors

With Eclipse (for example), if there are compilation errors you are asked if you wish to continue with the run anyway.  In AI you frequently can't anticipate all the environments your software will ultimately encounter.  With my Asa H software I have sometimes received things like "out of range" messages and elected to proceed anyway.  I find that I may still get reasonable responses from Asa when this occurs.  Sometimes this amounts to extrapolation.

Tuesday, June 16, 2015

A simple parallel computer

Multiple copies of programs like my Asa H 2.0 (see, for example, my blog of 10 Feb. 2011), each running in RobotBASIC on its own separate computer, can communicate with each other over the network using software described in chapter 8 and appendix C of Hardware Interfacing with RobotBASIC (Blankenship and Mishal, CreateSpace, 2011).
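As a rough stand-in for that RobotBASIC networking, here is a sketch of two agents exchanging a message over a local TCP socket; the JSON message format and the "case" payload are illustrative assumptions:

```python
# Sketch: two agents on one machine exchanging a case over a TCP socket.
# The message format is an illustrative assumption.
import json, socket, threading

def serve(sock, results):
    conn, _ = sock.accept()
    with conn:
        results.append(json.loads(conn.recv(4096).decode()))

server = socket.socket()
server.bind(("127.0.0.1", 0))        # bind to any free local port
server.listen(1)
port = server.getsockname()[1]

received = []
t = threading.Thread(target=serve, args=(server, received))
t.start()

# The "client" agent sends one learned case to the "server" agent.
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(json.dumps({"case": ["see light", "approach"]}).encode())
t.join()
server.close()
print(received[0]["case"][0])  # see light
```

Run across separate machines (with real addresses instead of 127.0.0.1), the same pattern gives a simple parallel society of agents.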

Wednesday, June 10, 2015

Cognitive science versus artificial intelligence

How might we distinguish between cognitive science and artificial intelligence?  Cognitive science uses the methods of science (e.g., theory and experimentation) to try to understand the nature of thought and mind (consciousness, reasoning, ...).  Artificial intelligence is more engineering: it uses the methods/resources of science, mathematics, and technology to create a useful product.  There is, of course, considerable overlap between the two.

Friday, June 5, 2015

Some limitations of current scientific practice

I find it very hard to convince my students that in order to really determine how large an effect is you must measure it multiple times.  They want to think all the measurements should give the same result.  I find it equally hard to convince my colleagues that we need to publish more replications.  "Unrealistic scientific optimism," posted 4 June 2015, is an excellent argument for why scientific journals need to publish more replications and more null results.
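A small worked example of the point about repeated measurements: several readings give both an estimate and an uncertainty, which a single reading cannot. The data below are made up for illustration:

```python
# Why measure more than once: repeated measurements yield an estimate
# AND an uncertainty (standard error of the mean). Readings are made up.
import math, statistics

readings = [9.78, 9.83, 9.80, 9.86, 9.79]   # e.g. five measurements of g
mean = statistics.mean(readings)
sem = statistics.stdev(readings) / math.sqrt(len(readings))
print(f"{mean:.2f} +/- {sem:.2f}")  # 9.81 +/- 0.01
```

The spread across readings is not a failure of the experiment; it is the information that tells you how well the effect has been determined.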

Sunday, May 31, 2015

Pattern language

Asa H can be thought of as inductively inferring an approximate tree grammar describing the regularities it sees in the world (including itself), similar to Oliver Wendel's model of neural activity as a pattern language (in Topics in case-based reasoning, Wess, Althoff, and Richter, Eds., Springer-Verlag, 1994, especially pages 433-434).

Feature extraction in Asa H

Asa H has employed various feature extraction methods/algorithms.  Once clusters have formed in a given level of the concept hierarchy the difference between two cluster centroid vectors can be taken as a candidate feature and potential category in the next lower level of the hierarchy. What distinguishes between different categories is important.
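A sketch of the centroid-difference heuristic described above; the two clusters and what their components mean are invented for illustration:

```python
# Sketch: the vector separating two cluster centroids is a candidate
# feature for the next lower level. Clusters are illustrative.

def centroid(cluster):
    n = len(cluster)
    return [sum(v[i] for v in cluster) / n for i in range(len(cluster[0]))]

wall_cases  = [[6.0, 0.9], [6.1, 1.0], [5.9, 1.1]]   # (color, contact force)
floor_cases = [[2.0, 0.1], [2.2, 0.0], [1.8, 0.2]]

feature = [a - b for a, b in zip(centroid(wall_cases), centroid(floor_cases))]
print(feature)  # the direction that distinguishes the two categories
```

The resulting difference vector points along exactly the direction that separates the categories, which is why it is a plausible candidate feature.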

Friday, May 29, 2015

Independent modular learning at early times and low levels in the Asa H hierarchy

There is some evidence that things like color, motion, depth, form, and audition are processed independently of one another in the human nervous system.  This suggests that the lowest several layers of Asa H could be trained for each sense independently using simpler data sets. (see post on vision versus audition, 7 Jan. 2015) These are, in part, preprocessors.

Tuesday, May 26, 2015

Asa H, CBR, and the AI curriculum

Case-based reasoning (CBR) only works if the agent has seen a sufficiently similar situation in the past.  Asa H's extrapolators help with this issue, but there is a need for Asa to have been trained on a rich enough curriculum prior to its deployment.

Monday, May 25, 2015

A future for idealism?

Evolution gives us our values.  As with all living things humans should value that which promotes their survival and spread.  Our current science and engineering help us cure our ills, grow our food, travel from place to place, etc. etc.  So far the idealist philosophers offer us nothing that we can use to design bridges, fly to the moon, treat our wounds, etc. etc.

But quantum mechanics suggests that ultimate reality may not be matter and space and time.  Konrad Zuse suggested that the universe might be a computer, an idea taken up more recently by Seth Lloyd.  The quantum mechanical wave function has been thought of as a "wave of probability" (information) and Wheeler has proposed his "It from bit."  These might be steps toward fleshing out a theory of idealism.

But if so there is still a long way to go in order to compete with our current materialist science.

Sunday, May 24, 2015

WordNet categories and Asa H

Asa H has learned about 60% of WordNet's top-level noun categories: act/action/activity, animal/fauna, artifact, attribute/property, body/corpus, cognition/knowledge, communication, event/happening, feeling/emotion, food, group/collection, location/place, motive, natural object, natural phenomenon, person/human, plant/flora, possession, process, quantity/amount, relation, shape, state/condition, substance, time.

Wednesday, May 20, 2015

Capitalism, war and peace

Some political philosophers try to argue that capitalism tends to promote war while others try to argue that it tends to promote peace.  I will only say that collectivism is inherently cooperative in nature while capitalism is inherently competitive, and war is competition taken to its extreme limit.

Monday, May 18, 2015

The AI curriculum and Asa H training

Ideally I might like to have a curriculum which first supplies simple patterns that are learned in the lowest level of the Asa H hierarchy.  Only when these simplest patterns were learned would one then advance to more complex, more abstract patterns and allow signaling to pass on up to the next level of the hierarchy.  This process would then be repeated to further levels of detail and complexity on each of the successive layers.

I know how to do this for simple tasks occurring in simple environments but not for more general agents acting in more realistic worlds.
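For the simple-task case, the layer-gated curriculum can be sketched like this; the toy memory, match rule, and threshold are all illustrative assumptions, not Asa H's actual mechanism:

```python
# Sketch of layer-gated curriculum training: a level only "unlocks" the
# next one once it matches its own training patterns well enough.
# Toy memory, match rule, and threshold are illustrative assumptions.

memory = {}

def learn(level, pattern):
    memory.setdefault(level, set()).add(pattern)

def match(level, pattern):
    return 1.0 if pattern in memory.get(level, set()) else 0.0

def train_layered(curriculum, threshold=0.9):
    unlocked = 0
    for level, patterns in enumerate(curriculum):
        while True:                      # keep presenting this level
            for p in patterns:
                learn(level, p)
            quality = sum(match(level, p) for p in patterns) / len(patterns)
            if quality >= threshold:     # learned well enough:
                unlocked = level + 1     # let signals pass up a level
                break
    return unlocked

curriculum = [("a", "b"), ("ab", "ba"), ("abba",)]
unlocked = train_layered(curriculum)
print(unlocked)  # 3: all three levels unlocked in order
```

The open problem stated above is choosing the per-level pattern sets and thresholds for rich, realistic worlds rather than toy ones like this.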

Monday, May 11, 2015

>5 senses

It is easy to give an AI more than the 5 human senses.  I have added a Vernier radiation monitor to Asa H's suite of sensors.  Asa quickly learns that the radiation level rises as it (a Lego mobile robot) approaches the orange disc gamma source.

Friday, May 8, 2015


When the appropriate region in our visual cortex is activated by strong signals from red cones and green cones in the eye and when no substantial signal from blue cones is present, that is the experience of the color yellow.  Similarly, when Asa H has the appropriate lower level case activated by a signal of value 5.5 coming in from a Lego NXT color sensor, that too is an experiencing of the color yellow. Either of these may also have become associated with simultaneously hearing the word "yellow" spoken. If you receive slightly different strength signals from the cones in your eyes then you experience a slightly different shade of yellow.  If Asa receives a 5.4 or 5.6 from the color sensor then it experiences a slightly different shade of yellow.
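A toy sketch of grounding a color word in a sensor reading this way; the prototype value 5.5 is from the post, while the tolerance and the function itself are illustrative assumptions:

```python
# Sketch: readings near a learned prototype count as the color; nearby
# readings are shades of it. Tolerance is an illustrative assumption.

def color_experience(reading, prototype=5.5, tolerance=0.5):
    if abs(reading - prototype) > tolerance:
        return None                  # not yellow at all
    if reading == prototype:
        return "yellow"
    return "shade of yellow"

print(color_experience(5.5))  # yellow
print(color_experience(5.6))  # shade of yellow
print(color_experience(3.0))  # None
```

The graded response around the prototype is what makes "a slightly different shade" a natural outcome rather than a special case.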

Wednesday, May 6, 2015

Space drive

People are asking me about Shawyer/White's "EM drive."  Years ago I did some work on the idea of "pushing against empty space" (R. Jones, American Journal of Physics, vol. 37, pg 1187, 1969), but it was research more along the lines of Jack Wisdom's work (Science, 21 March 2003, pg 1865).  Shawyer and Yang Juan and White have all offered various different explanations for their results.  Most of the work is unpublished and none of it has been independently verified.  Extraordinary claims require extraordinary evidence.  And, again, the majority of scientific papers may well be wrong. (New Scientist, 30 Aug. 2005)

Shawyer and White's device:

It may well be that we need to explore the details of the measurement systems being employed.  Some years ago I had a Geiger counter that would respond when I turned on an electric motor in the same room.  The counter was battery powered and I assume its circuitry was acting like a radio and picking up RF interference generated by the motor.  When I did radiation background measurements with my mechanical roughing vacuum pumps turned off I measured a low level background.  When I was doing (plasma) experiments the pump motors were turned on and the Geiger counter registered a higher level of radiation.  At first I thought this was all coming from my plasma. 

One of the suggested EM drive spaceships is to have a mass of 90,000 kg, be powered by a 2,000,000 watt nuclear reactor, and have a thrust of 800 newtons (0.4 N/kW).  This would give an acceleration of almost 0.009 m/s/s, and in a week or so of powered flight the ship would have a speed of almost 5400 m/s and kinetic energy of over 1.3 terajoules.  But the nuclear reactor will have only supplied an energy of E = Pt, perhaps 1.2 terajoules.  The violation of conservation of energy gets worse for longer powered flight times.

 With a constant thrust to mass ratio the acceleration is constant so the velocity increases linearly with time.  So the energy output, the ship's kinetic energy, must increase as time squared. But the input power from the nuclear reactor is constant so the input energy only increases linearly with time. So conservation of energy will always be violated if the powered flight time is long enough. This is the same issue I used to discredit the Dean Drive back in the 1960s. The EM drive would be an even better energy source than it is a propulsion system.
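The arithmetic in the argument above checks out numerically:

```python
# Checking the energy-balance argument for the proposed EM drive ship.
mass = 90_000.0          # kg
power = 2_000_000.0      # W, reactor output
thrust = 800.0           # N  (0.4 N/kW claimed)
t = 7 * 24 * 3600.0      # one week of powered flight, in seconds

a = thrust / mass                 # ~0.0089 m/s^2
v = a * t                         # ~5400 m/s after a week
kinetic = 0.5 * mass * v**2       # ~1.3e12 J of kinetic energy
supplied = power * t              # ~1.2e12 J delivered by the reactor

print(kinetic > supplied)  # True: more energy out than the reactor put in
```

Since kinetic energy grows as t squared while supplied energy grows only as t, the imbalance only worsens with longer burns, exactly as argued.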

So the EM drive not only pushes on the vacuum, it gets energy for free (from the vacuum??) as well.  Two miracles, each extremely unlikely to be true.  As with cold fusion, when the number of miracles required exceeds 1, at that point I give up hope.  It's just simple probability theory.

Tuesday, May 5, 2015

Benchmarks, beyond the Turing test

AI research involves software development and all software development involves regular and extensive testing.  Along with the AI curriculum I've been working on for years now (see my blog of 10 April 2015 and references therein) one would want some suitable test sets. 

I believe that intelligence, values/utility, consciousness,.... are all complex vector quantities.  Although a single test can certainly try to measure more than one quantity, still, it seems likely that more than one test might be needed in order to gauge an AI's overall performance. Jia You recently described how the Turing test might be replaced by a battery or suite of tests (Science, 9 Jan. 2015, pg 116).

In testing my own code I typically start with simple, and then more complex, logic functions (see chapter 1 of my book, Twelve Papers, for example).  For me, a follow-on test is often character recognition.  But where should one go from there?  I think a good test suite can only be developed in conjunction with the AI curriculum.  Perhaps the school of "test first" software development would have us create the test suite first and then the AI curriculum.  In designing an intelligence I would think that the opposite might be more reasonable, or, perhaps, working through both curriculum and tests in an iterative fashion.

Tuesday, April 28, 2015

Growing Asa H's concept of its self

In my blog of  4 March 2015 I identified a fragment of Asa's hierarchical case vector memory which constitutes an initial concept of self. I am watching this concept grow as Asa continues to interact with its environment.   Asa has added (or modified) the vectors (concepts):

push=(move to, touch, feel contact force)
kick=(ball near, push, ball far)
self=(health, grasp, kick)

What concepts are learned clearly depends upon the bot's detailed anatomy, sensors, and actuators. In a different world quite different concepts evolve.

Saturday, April 25, 2015

Asa H's language of thought

On each level in Asa H's hierarchical memory Asa defines and evolves symbols/words/concepts/categories.  Patterns are developed between levels which link these concepts into a semantic network.  This system is Asa's language of thought.  It differs from typical human languages in the degree to which it is hierarchically structured.

Constructed memory

Like humans, Asa H has a constructed memory. Typically, there is filtering, weak inputs are not retained at all.  When retained, an averaging is usually performed (with previous very similar memories).  Only the more strongly activated cases pass on activation to the next higher level in the Asa hierarchy.  Forgetting may be used in order to clear/maintain space in (limited) memory.

A forgetting heuristic

Retain longer those memories/cases with the highest and lowest utilities.
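A sketch of that heuristic; the case format, the utilities, and the use of distance-from-median as "extremeness" are all illustrative assumptions:

```python
# Sketch of the forgetting heuristic: when over capacity, drop the cases
# with middling utility first, keeping the highest- and lowest-utility
# memories longest. Case format and scores are illustrative.

def forget(cases, capacity):
    """cases: list of (name, utility). Keep the `capacity` cases whose
    utility lies farthest from the median."""
    if len(cases) <= capacity:
        return cases
    utils = sorted(u for _, u in cases)
    median = utils[len(utils) // 2]
    ranked = sorted(cases, key=lambda c: abs(c[1] - median), reverse=True)
    return ranked[:capacity]

cases = [("fire", 0.95), ("routine", 0.50), ("food", 0.90),
         ("noise", 0.45), ("injury", 0.05)]
kept = forget(cases, 3)
print(sorted(name for name, _ in kept))  # ['fire', 'food', 'injury']
```

The very good and very bad experiences survive; the unremarkable middle is what gets forgotten first.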

Thursday, April 23, 2015

Work on automatic programming

At a conference a few weeks ago I was asked if I had done any work on automatic programming.  That set me thinking.  I have done a little with genetic programming, but really very little.  I suppose some of my neural network work (and, for that matter, Asa H work) could be viewed as automatic program generation from data/examples.  As I think about it, however, the most practical work I've done along these lines is probably the assembly and subsequent use of my code library.  Most of this can be viewed as a component library.

Wednesday, April 22, 2015

Asa H as an informal system

Formal systems "have to fix the language and the rules of operating on symbols [and] definitely exclude subjectivity" (see Problem solving with neural networks, Wolfram Menzel, Institut fur Logik, Univ. Karlsruhe, Germany).  Asa H, then, is an automated INformal system, a system that defines and then refines its symbols and language on each of the levels of its hierarchy.  The rules of operation that apply between levels of the hierarchy are also variable over time.

Tuesday, April 21, 2015

Useful science, useless science

I have commented before on the analysis that suggests that the majority of scientific publications are, in fact, wrong. (see New Scientist, 30 Aug. 2005)  (I'm sure that the exact proportion varies some from one scientific field to another.)

I have observed that there are also a lot of papers that may not be wrong as such but which are just not very useful.  I won't name names but there is, for instance, a lot of work in computer science that involves the same old methods and algorithms but rewritten in whatever programming language happens to be popular at that time.

Monday, April 20, 2015

The reality of the wave function or quantum fields

I have argued before that not all ontological entities are equally real.  If what is real in the world is what has explanatory usefulness then not everything is equally real.  Not all concepts/entities have equal usefulness.  Deutsch and then Wallace (The Emergent Multiverse, Oxford Univ. Press, 2012, page 389) argue that there are not enough atoms in the universe to account for how Shor's quantum algorithm can factorize a number. The required machinery must be seen to be in the form of quantum fields, not matter. This argues strongly for the reality of the high dimensionality quantum realm. (But I would not necessarily say this has to be in the form of a set of emergent, non interacting,  nearly classical, Everettian worlds.)

Monday, April 13, 2015

Hierarchy of laws

In cognitive science operations in the cognitive or "knowledge level" are performed by lower level components of the program level.  Operations in the program level are, in turn, performed by components of the register level.  This decomposition continues from the register level down through the logic level, circuit level, and device level.  Each level has its own laws of operation. (Unified Theories of Cognition, Allen Newell, Harvard Univ. Press, 1990)  The program level, for instance, is typically composed of sequences, decisions, and loops.  The circuit level, on the other hand, is governed by Ohm's and Kirchhoff's laws.  In human beings the circuit level would be replaced by a network of neurons and the device level would describe those neurons using laws of electrochemistry.

Laws which are valid when applied to one level in this hierarchy may be invalid if applied to another level.  Boolean or propositional logic is valid in a computer at the logic level.  But in Asa H, a nonstandard logic, or fuzzy logic program different laws of logic are valid at the program level. A PC running a simulation of a quantum computer might be another good example.  The simulation is following quantum laws while the PC is following classical laws.  Philosophy of mind has sometimes erred by trying to apply the wrong laws to the wrong level.

Asa H (and any other intelligence) in turn builds its own cognitive levels on top of these.  As concepts like hunger/need, obstacle, damage, health, danger, and self evolve so too do notions of agency, good and bad, etc. Laws/rules that apply to these cognitive levels may not apply to other levels in the hierarchy.  (Things like social norms, morality, and the like.) In general, regularities/patterns (i.e. "laws") that are exhibited at one level of detail/abstraction, and with one set of concepts, may not be found on other levels.

If there are "laws of thought" then these would be valid in one or more of these cognitive levels.  Might something like "free will" or "moral responsibility" be a reasonable description in one of these  cognitive levels but not elsewhere? (see my blog of 21 Jan. 2015)  Pluralist science again.

What is "real" in the world is what has explanatory usefulness.  It may be useful to attribute "free will" to a person.  Rather than acting in the way s/he has previously acted s/he might do something different and we might want to be prepared for that today.  If the person has acted "morally" in the past s/he may likely act "morally" today also.  At some other level of description "free will" or "morality" may be useless  (invalid) concepts.  Newton's laws are valid on a level describing the macroscopic world.  They are invalid when describing the microscopic.

Sunday, April 12, 2015

AI, specialization, and reductionism

Much of science has been built following reductionism and specialization.  This has been criticized in artificial intelligence as "narrow AI" (see, for example, Artificial General Intelligence, Goertzel and Pennachin, eds., Springer 2007).  But in creating my Asa H AI I have found nearly all of the AI subfields helpful.  (For a list see my blog of 9 Sept. 2010.)  Asa H has made use of algorithms, concepts, and methods from most all of these specialties.  Specialization ("narrow AI") seems to have proven quite fruitful.

Friday, April 10, 2015

AI data warehousing

I have an extensive code library (see blog of 20 Feb. 2014) that contains algorithms from all of the major AI subfields (see blog of 9 Sept. 2010). What I need now is a data warehouse covering the AI curriculum I've discussed in blogs of 18 July 2014 and 23 March 2015.  The machine learning community has, for example, the 131 data sets in the UCI repository, DMOZ, etc.  Organized in order of complexity I currently have data sets for logic functions, numerals, letters, words, phrases, pictures, and movies.

Monday, April 6, 2015

Vector values again

In discussing the judging of student papers at the Kansas academy of science conference it was again obvious to me that quality can not be reduced down to a single scalar value.  Scientific papers must be judged in a 3 or 4 dimensional space.  One dimension involves the quality of the data taken or the experiment performed.  A second scale involves the quality of the analysis or theoretical work done.
A third dimension would measure how creative or original the work is.  A possible fourth scale might measure the amount or volume of work done.  If a given paper is best in all of these measures it can be ranked as best overall.  But otherwise there is no "best."
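This is essentially Pareto dominance: one paper outranks another only if it is at least as good on every dimension and strictly better on at least one. A sketch, with scores invented for illustration:

```python
# Sketch of "no best" with vector values: Pareto dominance over the
# (data, analysis, originality, volume) dimensions. Scores are made up.

def dominates(a, b):
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

paper_1 = (9, 7, 8, 6)
paper_2 = (7, 9, 6, 8)
paper_3 = (6, 6, 5, 5)

print(dominates(paper_1, paper_3))  # True: paper 1 is better overall
print(dominates(paper_1, paper_2))  # False: neither dominates, no "best"
```

When neither paper dominates the other, any single overall ranking forces an arbitrary collapse of the vector down to a scalar.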

Friday, April 3, 2015


Years ago I taught a summer course to 2 students.  It was considered important.
Another year my physics course didn't run but they found me an algebra course to teach.
These days they have no work for me in summer.  Another question of values.

Values, learning, and science

I have argued previously that science, like other forms of cognitive processing, cannot be value-free.

A reinforcement learner accepts among its inputs a stream of rewards.  These constitute some of what it values.

A learning system like a backpropagation neural network may have no reward stream but if it learns from sets of inputs and outputs one of the things it will slowly learn is preferences (values) inherent in the training set supplied to it.

A learning system trained on input (observations) alone (no output actions) will  value things like amount learned, speed of learning, precision of recall, etc.  These will be built into the learner in the form of various thresholds, learning rate parameters, vigilance parameters, similarity measures, etc.

Science will not be value free any more than any other cognitive process going on in such a machine/human.

Monday, March 30, 2015

Executive control in artificial intelligences

Some researchers believe that a general executive function for an AI requires a "hardware solution," a specialized module as a part of  the AI's cognitive architecture. (see the work of Dario D. Salvucci, for example)  One argument in favor of this view is the belief that procedural knowledge and control processes are handled in different segments of the human brain, basal ganglia versus dorsolateral prefrontal cortex.

As a simple example of executive control, executive processes might pick the most activated schema and cause it to take control of cognition. Things get more difficult in situations where multitasking is required.

Other researchers believe that a general executive function might result from simply making task goals another element in working memory.  In rule based systems, for example, one set of rules can control other rules by modifying the goals that are currently active in memory.  (see the work of David Kieras for example) One argument in support of this view is the fact that modern computer operating system development  typically minimizes the use of "hardware fixes" in dealing with control issues. (things like interrupt hardware)

This is all related to the question of what consciousness is and how it works, both in humans and in machines.

Saturday, March 28, 2015

Another way in which technology has changed my life

I now look up all manner of things on the web with my smartphone.  Things I wouldn't have even known how to find in a library or in reference books in years gone by.

Wednesday, March 25, 2015

Some forms of cooperation or coordination in multi-agent systems

I have done some work with societies of Asa H agents and am interested in how cooperation can occur in such systems.  I have experimented with the following:

1. Trading and combining of casebases/knowledgebases between agents (both within a single generation and from one generation of agents to the next).

2. Deferring tasks to specialist agents or the active assigning of tasks to specialist agents (through the action of an administrative agent).

3. Localized action, with tasks assigned to agents according to their distribution in space.

4. Emergent cooperation.

5. Agents organized into blackboard systems.

6. Communicative coordination between Asa H and humans. (on my website see chapter 1 of my book Twelve Papers)

I have not done any work with communicative coordination going on between the AIs themselves, nor with  agents negotiating or contracting with each other.
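Item 1 might be sketched as follows; the case representation (situation, action, utility) and the keep-the-higher-utility merge rule are illustrative assumptions, not Asa H's actual code.

```python
# Sketch of two agents trading and combining casebases.  Cases map a
# situation vector to an (action, utility) pair; when both agents cover
# the same situation the higher-utility case is kept.  All entries invented.

def merge_casebases(cb_a, cb_b):
    merged = dict(cb_a)
    for situation, (action, utility) in cb_b.items():
        if situation not in merged or utility > merged[situation][1]:
            merged[situation] = (action, utility)
    return merged

agent1 = {(0, 1): ("advance", 0.7)}
agent2 = {(0, 1): ("retreat", 0.4), (1, 1): ("turn", 0.9)}
shared = merge_casebases(agent1, agent2)
print(shared)  # agent1's higher-utility case wins; agent2 contributes (1, 1)
```

The same merge could run within a generation (trading) or across generations (inheritance).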

Monday, March 23, 2015

Curriculum for an AI and knowledge organization

The order in which you teach things to an intelligence is important.  The "ideal" curriculum for an AI may be different from that for humans.  In the case of humans (and a few AI systems) some results appear in: In Order to Learn, F. E. Ritter, et al, editors, Oxford Univ. Press, 2007. In the case of an AI, I have described some of what I've taught Asa H in my various publications and in this blog.

As an example, with both AIs and humans one should teach letters first, then words, then phrases, then composition.  In general, start with small items to learn and progress toward larger ones.  If the elements of the topic being taught are interrelated, teach the individual elements first, then teach the associations between the elements.

Constrain early learning more.  Relax the constraints as learning proceeds.

On the other hand, with a multiagent AI we might sometimes wish to train different agents on the same patterns but presented in a different order. This can force the different agents to form different mental models/categories and enhance mental diversity.

Humans have an issue with the splitting of attention but AIs will typically have more STM than a human does and so this is less of a problem.  AIs can potentially do more in parallel.

Some knowledge organization/partitioning/clustering/sorting can be built directly into an AI's memory and can follow standard library techniques and practice. (for example see Dewey decimal classification and relative index, Forest Press, 1971 and Theory of classification, K. Kumar, Vikas pub., 2004) The knowledge stored on one given hard drive might be what would otherwise have been found in a given stack in a library of print books. In Asa H, for instance, sufficiently similar case vectors are clustered into a given casebase (one of many casebases which Asa then uses).
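A rough sketch of that clustering; the cosine similarity measure and the 0.9 threshold are assumptions for illustration, not Asa H's actual values.

```python
# Route "sufficiently similar" case vectors into a shared casebase;
# dissimilar cases start new casebases.  Similarity measure and
# threshold are illustrative assumptions.

import math

def similarity(u, v):
    """Cosine similarity of two case vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def assign(case, casebases, threshold=0.9):
    """Add case to the first casebase whose prototype it matches,
    else start a new casebase with this case as its prototype."""
    for prototype, cases in casebases:
        if similarity(case, prototype) >= threshold:
            cases.append(case)
            return
    casebases.append((case, [case]))

casebases = []
for case in [(1.0, 0.0), (0.99, 0.05), (0.0, 1.0)]:
    assign(case, casebases)
print(len(casebases))  # 2 -- one cluster near (1,0), one near (0,1)
```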

Advanced training for a society of  AIs might possibly resemble, be organized according to, and be modeled after the  training of humans in their various common career tracks.

Issues such as these become important as the size of the knowledgebase grows and the knowledge becomes more diverse.  It must also be possible to change the knowledge and its organization for reasons like those described by Arbesman in: The Half-life of Facts: Why everything we know has an expiration date, Penguin, 2012.

Open systems

Marty Solomon, an old friend of mine from college and grad school days, has argued that humans can do things that formal systems (Turing machines, digital computers) can't because humans are open systems, systems which interact with the world using a (rich, wide bandwidth) array of sensors. (Brit. J. Phil. Sci., 45, 1994, pg 549) He suggests that a computer augmented by "...sensory inputs from sophisticated input facilities..." might also qualify as such an open system.  (The simple sensors possible with the Lego NXT robots clearly might not be enough.)

I hold rather similar views but might emphasize outputs as well as inputs. Asa H, for example, observes spatial-temporal patterns in the world, decomposes them, changes them, and assembles new patterns, some of which have never been seen in the real world.  Asa performs some of these patterns during the course of its daily activities and evaluates their usefulness.  It experiments. It injects new structure into the world. (For reasons I've explained before, however, I do not think this implies that all AIs have to be embodied in order to be intelligent and act intelligently.)

Friday, March 13, 2015

An americanized version of the Parom tug

With Jupiter and Exoliner Lockheed-Martin is proposing an american version of the russian Parom orbital tug and cargo container system. The launch vehicle is the Atlas V which uses russian engines.  If only Lockheed-Martin could cooperate with the russians on the tug as well. Both sides would benefit.

Wednesday, March 11, 2015

Design patterns for AI

Would the use of design patterns like: state, adapter, composite, etc. aid in and speed the construction of high quality AI software?  Or, are we still at the stage of discovering/inventing the patterns that we will need?

Thursday, March 5, 2015

cognitive architectures

I believe that building more intelligent software requires us to explore the space of cognitive architectures.  To that end I have Asa H, ACT-R, SOAR, CHREST/EPAM, subsumption architecture, various ANNs (deep learning MLPs, ART, etc.), and others running.

STM and LTM in Asa H

Each of the layers of the Asa H hierarchy has short term and long term memories.  The short term memory stores the few recently active input patterns while the long term memory stores the concepts/patterns which that layer has learned to identify and use. Globally, however, it is possible that the LTM in lower layers of the hierarchy may actually change faster than the STM of some higher layer in the hierarchy.
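A toy sketch of one such layer (the sizes and the store-every-novel-pattern rule are illustrative assumptions):

```python
# One layer: a small STM of recent patterns, a growing LTM of concepts.
# STM size and the novelty rule are invented for illustration.

from collections import deque

class Layer:
    def __init__(self, stm_size=3):
        self.stm = deque(maxlen=stm_size)   # few recently active input patterns
        self.ltm = []                       # concepts/patterns learned so far

    def observe(self, pattern):
        self.stm.append(pattern)            # STM always updates
        if pattern not in self.ltm:         # LTM grows only on novel patterns
            self.ltm.append(pattern)

layer = Layer()
for p in ["a", "b", "a", "c", "d"]:
    layer.observe(p)
print(list(layer.stm))  # ['a', 'c', 'd'] -- only the most recent three
print(layer.ltm)        # ['a', 'b', 'c', 'd'] -- everything learned
```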

Wednesday, March 4, 2015

Asa H's concept of its self

As Asa H creates/grows a hierarchical model of its interaction with its environment it forms a model of itself in that environment.  Here is a fragment of a concept hierarchy grown by Asa H showing the model it has created of itself. (I am the one who has named the various nodes, but Asa H can be taught the names too; see chapter 1 of my book Twelve Papers.)  As Asa H continues to act in its world this self concept becomes activated periodically.

Monday, March 2, 2015

The MicroPsi cognitive architecture

I am studying  Bach's  MicroPsi cognitive architecture (Principles of synthetic intelligence, Oxford Univ. Press, 2009) and have downloaded a copy of the MicroPsi 2 software. MicroPsi has a heuristic value system that is modeled after that of humans.  It is not based on a simple scalar utility, rather, it employs a set of drives and aversions that attempt to measure and respond to things like:

affiliation and external legitimacy
fulfillment of social norms

I have criticized the human value system. I believe that a system like MicroPsi's may make the AI more human but less rational.  I don't think the values MicroPsi uses are the most fundamental values.  (as defined by evolutionary biology, for example)
Like my Asa H, MicroPsi "encodes semantic relationships in a hierarchical spreading activation network.  The representations are grounded in sensors and actuators and are acquired by autonomous exploration."

Wednesday, February 25, 2015

My externalism

Sometimes I have to go look up what I believe. I have recorded my best arguments on various difficult subjects (some of these here in my blogs).  Like a complex math proof or calculation I've done they are not on the tip of my tongue, something I can just rattle off.

Tuesday, February 24, 2015


I now have a copy of Gobet and Lane's CHREST 4 cognitive architecture running in my lab. I am interested in how attention works in CHREST, the size of the fragments of stimuli that are learned, and how chunks grow incrementally larger as learning continues. These are all related to similar issues in my Asa H architecture.  Having had different prior experiences, CHREST extracts different concepts/chunks and models subsequent experiences differently.  It experiences an alternate reality, like Asa does.

Monday, February 23, 2015


It is not surprising to find that having knowledge of final conditions plus knowledge of initial conditions may tell us more than having knowledge of initial conditions alone. In a game of Russian roulette we might have initial conditions at time t0.  We might know that the cylinder was spun, the gun was pointed at the victim's head, and the trigger was pulled.  Given just these initial conditions we have a 5/6 chance of hearing a click and a 1/6 chance of hearing a bang at time t1. (t1 after t0)  But, given the added final condition that we know the victim is dead from gunshot wounds at time t2 (t2 after t1) we have increased the chance that a bang was recorded back at t1.  No retrocausation is implied.  To have retrocausation we want to be able to control the final condition at t2 (not just have a record of it) and cause a change at t1.
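The argument can be made quantitative with Bayes' rule; the conditional probabilities P(dead | bang) and P(dead | click) below are assumed values for illustration.

```python
# Knowing the final condition (victim dead at t2) raises the probability
# that a bang occurred at t1.  The two conditional probabilities are
# invented for illustration.

p_bang = 1 / 6             # prior from the spun cylinder
p_click = 5 / 6
p_dead_given_bang = 0.95   # assumption: a shot is usually fatal
p_dead_given_click = 0.001 # assumption: death from some other cause

p_dead = p_bang * p_dead_given_bang + p_click * p_dead_given_click
p_bang_given_dead = p_bang * p_dead_given_bang / p_dead  # Bayes' rule
print(round(p_bang_given_dead, 3))  # 0.995 -- far above the 1/6 prior
```

The final condition is only a record here; nothing at t2 causally changes t1, which is the point of the post.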

Friday, February 20, 2015

Spiking neural networks

Although it has limited learning capability I was quite impressed by Eliasmith et al's computer model of the human brain, Spaun.  (see Science, vol. 338, 30 Nov. 2012, pg 1202 for example)  In order to play with spiking neurons myself I downloaded a copy of Carnevale and Hines' NEURON 7.3 simulation environment.  I have this software up and running but need to order a copy of Carnevale and Hines' book.

Is nothing something?

Martin Heidegger claimed that the most fundamental question of philosophy is why there is something rather than nothing.  But perhaps nothing is something too.  Nothing has properties: length, width, depth, duration, permeability, permittivity, etc., and these properties can be measured by our senses or by instruments.  If one then asks why a particular thing has the properties it has, the answer may be that we define exactly those properties that allow us to distinguish things from one another, i.e., to categorize and organize, to describe. (And there is no need for everyone to define the same properties and the same categories. Reality can have alternate descriptions. And we are all free to create categories, like unicorns, that aren't really observed.) Again, "nothing" would be just another "something."

Thursday, February 19, 2015

Concepts and emergence

A concept valid and useful at one level in the concept hierarchy might not be valid at other levels.  Consider the concept of wetness.  Water is wet.  Hydrogen, oxygen, and atoms in general are not wet.  A useful description at one level in our hierarchy of models may not be valid or useful at other levels.

Asa as a tool for philosophical research

Asa has been (and is being) used as a platform to explore things like:

alternate conceptualizations of reality (see blogs of 31 Dec. 2010 and 22 April 2013)
consciousness (see blogs of 29 June 2011 and 15 Oct. 2014)
free will (see blog of 21 Jan. 2015)
imagination (see blog of 12 Feb. 2015)
values (see blog of 9 Jan. 2015)

Wednesday, February 18, 2015


In pluralistic science we maintain multiple theories of a knowledge domain rather than just some single "best" theory. (see blog of 17 Aug. 2012)    This means that individual errors are often less serious than they are in traditional science since they frequently impact only one of our models and usually not all of them. But we should still work as hard as possible to exclude error. (blog of 2 April 2012)
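A toy numerical illustration (with made-up model outputs): when we keep several models and take, say, their median, one model's error barely moves the result.

```python
# Five models of the same quantity; values invented for illustration.

import statistics

models = [10.1, 9.9, 10.0, 10.2, 9.8]
print(statistics.median(models))   # 10.0

models[0] = 50.0                   # one model goes badly wrong
print(statistics.median(models))   # still 10.0: the error is contained
```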

Monday, February 16, 2015

Weighting features from above and below

I have experimented with weighting input features in Asa H.
Forward/upward weighting:  When a category is active in a layer of the Asa H hierarchy a utility value for that category can be passed up to the next Asa H layer along with that category's current activity value.  This is one of the input features for the next layer in the hierarchy.  That feature's input activation can be weighted with its accompanying utility value.
Backward/downward weighting: As input features are compared with and activate a category in a given layer this (output) category has, itself, a utility which can be used as a weight for the input features. (Trans. Kan. Acad. Sci., vol 109, no 3/4, pg 160, 2006)
Some other weightings:
Weight a feature according to how often it is seen.
Weight a feature according to how often it changes.
Weight a feature according to some average of the utilities of the categories it occurs in.
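Two of these weightings might be sketched as follows; the feature values, utilities, and counts are invented for illustration.

```python
# Feature values, utilities, and counts below are all invented.

features = [0.5, 0.8, 0.2]

# forward/upward weighting: each feature arrives with a utility value
utilities = [1.0, 0.3, 0.9]
weighted_up = [round(f * u, 3) for f, u in zip(features, utilities)]

# frequency weighting: weight a feature by how often it has been seen
counts = [10, 2, 5]
weighted_freq = [round(f * c / sum(counts), 3) for f, c in zip(features, counts)]

print(weighted_up)    # [0.5, 0.24, 0.18]
print(weighted_freq)
```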

Friday, February 13, 2015

Deep space?

Calling flights to lunar distances flights to deep space is highly inaccurate and pretentious.  A reasonable name for the space at lunar distances is cislunar space.  Flights to Mars and other planets are flights in interplanetary space.  Flights well outside the solar system would be flights in interstellar space.  (Voyager is at the edge of interstellar space.)  Flights outside the Milky Way would be in intergalactic space.  That might be getting us close to deep space.

Thursday, February 12, 2015

Concepts, concept change, imagination

Many concepts are empirically grounded.  Near and far might be defined for Asa by a Lego NXT ultrasonic sensor.  Push and pull might be defined for Asa by a Lego NXT or Vernier force sensor.  And so on.  Other concepts are at least partially nonempirical; the concept of a unicorn, for example.  Asa may have seen pictures of horses and goats.  The concept of a goat will have a horn as one of its features.  (A feature being a concept stored on the next lower level of the Asa H hierarchy.)  Asa's various learning algorithms include things like vector interpolation, extrapolation, chaining, etc.  Asa may try to combine the features of a horse with those of a goat, for example, and produce the concept of a unicorn.
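A toy sketch of that feature combination; the feature sets are illustrative, not Asa H's actual representations.

```python
# Feature sets for two empirically grounded concepts (invented features):
horse = {"mane", "hooves", "long_tail", "large_body"}
goat = {"horn", "hooves", "beard", "small_body"}

# Combine learned features to form a concept never seen in the world:
unicorn = horse | {"horn"}   # borrow the goat's horn feature

print(sorted(unicorn))
print(unicorn <= horse)      # False: a genuinely new combination
```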

Tuesday, February 10, 2015

natural language preprocessor

I am working on a natural language preprocessor, possibly for use with Asa H.  The simplest version forces you to use only words that the AI knows.  It compares each word that is input to a vocabulary listing of all the words the AI understands.  A more complex version of the preprocessor would allow the use of words that are synonyms of the words the AI understands.  This would involve augmenting the AI's vocabulary listing by adding synonyms.  A still more complex preprocessor would search for phrases in the input and compare these with synonym phrases.  Again, the vocabulary listing would be expanded to include the phrases that the AI understands and sets of synonym phrases. A spell checker may also be useful.
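The first two stages of the preprocessor might be sketched like this; the vocabulary and synonym table are tiny illustrative assumptions.

```python
# Stage 1: restrict input to known words.  Stage 2: map synonyms onto
# known words.  Vocabulary and synonym entries are invented.

vocabulary = {"move", "stop", "near", "far"}
synonyms = {"go": "move", "halt": "stop", "close": "near", "distant": "far"}

def preprocess(sentence):
    out = []
    for word in sentence.lower().split():
        if word in vocabulary:
            out.append(word)            # a word the AI already knows
        elif word in synonyms:
            out.append(synonyms[word])  # replace with a known synonym
        else:
            out.append("?")             # unknown word flagged for the user
    return " ".join(out)

print(preprocess("Go near the wall"))   # move near ? ?
```

The phrase-level version would scan for multiword units before the word loop, and a spell checker could normalize the input first.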

Sunday, February 8, 2015


Franklin Chang Diaz's VASIMR plasma rocket engine is very similar to the work I did in 1980 (see I.E.E.E. Transactions on Plasma Science, 10, 8, 1982) except that VASIMR uses ICRH ion heating.  But it seems to me that ICRH would preferentially increase ion motion perpendicular to the magnetic field when what one wants is ion motion parallel to the B field.  I would think that in any case heating the electrons will ultimately accelerate the ions down the plasma potential gradient and out the magnetic nozzle.  So preferential ion heating seems unnecessary anyway.

Thursday, January 29, 2015

The importance of the curriculum for a learner

As a substitute for knowledge acquisition and engineering, machine learning is frequently thought of as free. But you can't just release an AI (or a human infant) into the wild.  As John Andreae has observed with his AI PP: "...on its own.  It quickly runs out of... is better for PP to be 'taught' by a teacher." (Associative Learning, Imperial College Press, 1998, pg 13)  With Asa H I find that what is taught and the order in which it is presented is quite important.

Friday, January 23, 2015

Man and machine

In recent years I find myself spending more of my time with machines (computers) and less with humans.  I simply find machines to be more rational than people are.  This may be sad, or it may be another sort of Turing test.

Wednesday, January 21, 2015


I have heard good things about the QNX operating system so I have ordered a Blackberry playbook tablet in order to try it out.

Free will and nonlinearity

Nonlinear descriptions of reality may be one of the origins of what we think of as free will.

For some problems (like robot motion planning) it is appropriate to explore multiple alternative solutions (e.g. alternative routes). Suppose an AI has learned to model some activity using a quadratic function.  For a given input condition it computes the (>1) roots of this model quadratic.  Even if the AI always picks a solution (root) in the same way (first found, smallest, randomly, etc.) it sees that another output (solution) would work too.  It sees itself as free to use either solution to its problem.

The AI is going to store and reuse some of its problem solutions.  As goals and external conditions change it may even start using other roots or choose from the available solutions (roots) in some different way.  A notion/concept of free will might develop from this.
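A sketch of the quadratic-model example (with illustrative coefficients): the agent computes both roots, picks one by a fixed policy, and can see that the other would have worked too.

```python
# A model of some activity as a quadratic; both roots are valid outputs.
# The coefficients are invented for illustration.

import math

def roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0 (assumes a nonnegative discriminant)."""
    d = math.sqrt(b * b - 4 * a * c)
    return sorted(((-b - d) / (2 * a), (-b + d) / (2 * a)))

solutions = roots(1, -5, 6)   # model: x^2 - 5x + 6 = 0
print(solutions)              # [2.0, 3.0] -- two workable outputs

chosen = solutions[0]         # fixed policy: always take the smallest root
# ...but the agent "sees" that solutions[1] would have worked as well.
```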

With a society of Asa H agents I sometimes use an executive or router to assign tasks (or send input) to one or more of the specialist agents.  I am looking to see if a concept like "free will" evolves in this executive.

Friday, January 16, 2015

Intelligent systems

In 2000 C. W. de Silva argued that an intelligent system would possess:

sensory perception
pattern recognition
learning and knowledge acquisition
inference from incomplete information
inference from qualitative or approximate information
ability to deal with unfamiliar situations
adaptability to new, yet related situations
inductive reasoning
common sense
display of emotions

and that the then "current generation of intelligent machines do not claim to have all these capabilities." (Intelligent Machines, CRC Press, 2000, pg 5)

I claim that my AI Asa H has now demonstrated all of these capabilities (to varying degrees).

Friday, January 9, 2015

Asa H value change

Intelligences may change their values over time.  Slavery was once accepted by humans, now it is not.  I studied value change during my work on Asa F (see Trans. Kan. Acad. Sci., 107, 1/2, 2004, pg 37).

During some experiments Asa H has done self monitoring, watching how things like memory size contribute to utility/value improvement.  (see my book Twelve Papers, pgs 15 & 16, available on my website)  In this way Asa H defined and developed the concept "knowledge."

Starting with only two primary values, offspring (copies) and lifespan, a society of  Asa H agents has now promoted knowledge into this category and reported this to me.

(I frequently use a society of agents because groups make better decisions than individuals do for the reasons explained in my blog of 17 Aug. 2012.)

Wednesday, January 7, 2015

Je suis Charlie

See my blog of 25 Dec. 2014.

Simple simulation environment

Robot simulations are faster and more economical than real physical mobile robots. Any simulation can be thought of as 2 coupled Turing machines, one, agent p, representing the robot, and the other, environment q, representing the environment:

At any time step the robot sees an input vector x' and may receive rewards r.  It also produces an output vector y.

The environment at any time step receives an input vector y and generates a response vector x' and a reward r.  An especially simple environment is nothing more than a case-based reasoner or an approximate lookup table.

Asa H agents (serving as agent p) can be taught certain concepts/behaviors in such  a simulator.
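The two coupled machines might be sketched as below; the vectors, the lookup-table environment, and the agent's trivial policy are all illustrative.

```python
# Agent p maps x' to y; environment q is a bare lookup table mapping
# y to the next (x', r).  All entries and the policy are invented.

environment_q = {            # y -> (next x', reward r)
    "forward": ((1, 0), 0.0),
    "grab":    ((1, 1), 1.0),
}

def agent_p(x_prime):
    """Trivial policy: grab when the object is sensed, else move forward."""
    return "grab" if x_prime == (1, 0) else "forward"

x_prime, total_reward = (0, 0), 0.0
for t in range(3):                  # a few coupled time steps
    y = agent_p(x_prime)            # robot output
    x_prime, r = environment_q[y]   # environment response and reward
    total_reward += r
print(total_reward)                 # 1.0
```

An Asa H agent would replace `agent_p`, learning its policy from the x', r, y stream instead of having it written in.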

Multiple memories again

Tulving and others have suggested that humans may have multiple memory systems. (see some possible examples in the figure below)  One might gain efficiency by using different representations in different memories, acted on by different algorithms.  The simplest example might be 2-dimensional arrays to store visual information and 1-dimensional lists to store audio information.  The simplest implementation in Asa H might be to use 2 copies of Asa H on the lowest level in the hierarchy, one with NM=1 for audio input and one with NM set equal to the number of pixels in an image for visual input. (see my blog of 10 Feb 2011 for an example of simple code)  At some higher level in the Asa hierarchy the outputs of these two sensory modalities would then be combined together (concatenated).  Things like translation, rotation, and reflection operations would only be applied to the visual memory.  Things like time dilation would be applied to both.  More complex examples of the use (and usefulness) of multiple memories are also under study.
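A minimal sketch of the two representations and their concatenation; the array sizes and the reflection operation are illustrative.

```python
# A 1-D list for audio, a 2-D array for vision, concatenated (flattened)
# for a higher layer.  Sizes and values are invented.

audio = [0.2, 0.7, 0.1]      # NM = 1 stream over time
visual = [[0, 1],
          [1, 0]]            # NM = number of pixels

def reflect(image):
    """Reflection applies only to the visual memory."""
    return [list(reversed(row)) for row in image]

# a higher layer sees the two modalities concatenated together
combined = audio + [p for row in visual for p in row]
print(combined)          # [0.2, 0.7, 0.1, 0, 1, 1, 0]
print(reflect(visual))   # [[1, 0], [0, 1]]
```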

Tuesday, January 6, 2015

V-SIDO robot operating system

I downloaded a copy of Wataru Yoshizaki's V-SIDO robot operating system. (alpha version 0.42)  The documentation is in japanese but I managed to run the simulator by:

double click on       v-sido 0.42    file
double click on       bin
double click on       vsido             application
click on                  1: ########  box
click on                   OK              box
click on various green buttons on simulated robot moving it around

This worked fine. If a real robot is interfaced to your computer it is supposed to do whatever you make the simulated robot do. I don't own an ASRA C1 so I couldn't check that out.

Standard of living and quality of life

Standard of living/quality of life should be a vector quantity having components like: health, safety, autonomy, housing, resources, nutrition, education, employment, influence, etc.  It should not be collapsed down to a scalar. You can only be sure of an improvement if all of the vector components have improved (or some stayed unchanged while all the others improved).
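This comparison rule is just Pareto dominance; a sketch follows, with invented component names and values.

```python
# An improvement only when every component is at least as good and at
# least one is strictly better (Pareto dominance).  Values are invented.

def improved(old, new):
    return all(n >= o for o, n in zip(old, new)) and any(
        n > o for o, n in zip(old, new))

# components: (health, safety, housing, education)
before = (0.6, 0.8, 0.5, 0.7)
print(improved(before, (0.7, 0.8, 0.6, 0.7)))  # True: two up, none down
print(improved(before, (0.9, 0.7, 0.9, 0.9)))  # False: safety got worse
```

When neither vector dominates, the honest answer is that the comparison is undecided, which is exactly what collapsing to a scalar hides.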

Thursday, January 1, 2015

The relationship between my AI Asa H and theories of the human mind and brain

Asa H is built out of case-based reasoners.  It is Roger Schank's view that "Case-based reasoning is the essence of how human reasoning works." (Case-based planning, K.J. Hammond, Academic Press, 1989, pg xiii)
Asa H operates on patterns in much the same way that ART networks do.  "ART was introduced as a theory of human cognitive information processing." by S. Grossberg. (Brain-mind machinery, G. Ng, World Scientific, 2009, pg 146)
The Asa H hierarchy operates in about the same way as does J. Hawkins' HTM.  HTM is Hawkins' model of the human neocortex. (On Intelligence, J. Hawkins, Times Books, 2004)
In Asa H clustering modules perform categorization.  This appears to occur in neocortical layers II and III  in the human brain.(Rodrigues, et al, J. Cog. Neurosci., 16, 856, 2004)
D. Hofstadter argues that "Analogy is the core of all thinking." (Surfaces and essences, Basic Books, 2013) Analogy is one of Asa H's basic learning/extrapolation algorithms.
Granger, Rodriguez, et al, identify the brain's computational instruction set as consisting of: sequence completion, hierarchical clustering, retrieval trees, hash coding, compression, time dilation, and reinforcement learning. (AAAI technical report FS-04-01, 2004, pg 36) Asa H makes use of most or all of these.

AI vs traditional programming, automatic programming

Most of the programming examples we teach in an introductory programming course have all or most of their functions built in before run time. Things like accounting programs, databases, inventory programs, etc. But it's possible for programs to acquire (some of) their functions during the run, and completely without human intervention.  This is, of course, a matter of degree. It's like heredity vs environment with humans.

Those functions that are acquired during run time can come into the system with the data input stream (including any performance evaluation inputs/utility measures).  These functions may be spatial-temporal patterns seen in nature and learned by the program. Self-organizing systems would be a typical example of such programs, as would be my Asa H, neural networks, and many other AI systems.