Saturday, September 12, 2020

AI curriculum and vocabulary

The best curriculum for training any given AI agent probably depends upon the specialization that agent will take up. For an artificial general intelligence I've felt that perhaps one should begin by giving the agent something like the set of concepts listed in my blog of 1 October 2015, then fill in the remaining concepts needed for the Toki Pona language. From there one can build up the vocabulary of Ogden's Simplish (Basic English). After that, reading of dictionaries and an encyclopedia. (This tends to emphasize human conceptualizations and vocabulary, of course, while deemphasizing possible alternative concepts.)

Tuesday, September 1, 2020

AI reading again

At one time or another I have taught A.s.a. H. much* of Ogden's Simplish (Basic English).** Rather than reading the internet, perhaps A.s.a. should read a good dictionary, grow its vocabulary, and then read a good encyclopedia.*** This raises the whole issue of AI curriculum again.

Humans typically employ a fairly large vocabulary. What can be done with a small vocabulary like Toki Pona and what requires a larger one? Is greater compression simply placing more demands upon context?

* I don't necessarily want to give A.s.a. concepts of church and religion for example.

** What vocabulary an agent needs depends, of course, on its specialization.

*** There are computer programs to translate English into Simplish. I don't know how good they are.

Wednesday, August 19, 2020

Spam filters for A.s.a. H.?

How much might something similar to spam filtering help to address the problem identified in my blog of 6 June 2020?

Tuesday, August 11, 2020

A.s.a. H., A.r.t., and Python

A.s.a. H. can make use of various clustering algorithms, including Grossberg's adaptive resonance theory. I have an ultralight version of A.s.a., using A.r.t., written in Python and running on Raspberry Pis, which can be carried by and interfaced with mobile robots. I have a second small program like this using k-means clustering, a third using another learning vector quantization algorithm, and a fourth employing Kohonen's self-organizing map (allowing some comparison between the different clustering algorithms).
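For comparison's sake, the clustering core of one of these ultralight programs can be sketched in a few lines of Python. This is not the actual A.s.a. code, just a minimal illustration of the k-means step such a small agent might run; the data points and parameters are invented:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: cluster feature vectors around k prototype centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            clusters[i].append(p)
        # Update step: move each center to the mean of its cluster.
        for i, c in enumerate(clusters):
            if c:
                centers[i] = tuple(sum(xs) / len(c) for xs in zip(*c))
    return centers

# Two well-separated clumps of sensor readings (invented data):
points = [(0.1, 0.0), (0.0, 0.2), (5.0, 5.1), (5.2, 4.9)]
print(sorted(kmeans(points, 2)))
```

The other small programs differ mainly in this update rule (resonance test for A.r.t., winner-take-all prototype nudging for LVQ, neighborhood updates for the self-organizing map).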

Sunday, August 2, 2020

Making A.s.a. H. more intelligent

In keeping with my blog of 14 July 2020 I have swapped out Raspberry Pi 3Bs for Raspberry Pi 4Bs with 1, 4, and then 8 GB of RAM. This also required an upgrade of the Raspberry Pi OS each time.

Thursday, July 30, 2020

Teaching online

Some things cannot be taught online. You can’t teach someone how to swim online. You should not try to teach someone brain surgery online. More generally, for subjects that are really difficult* one should have all of the teaching tools and environments available. These will include the internet but should not be limited to it. Totally online instruction is better than nothing,** but not as good as the real thing.

* Quantum mechanics and relativity, for example.
** And may be used in real emergencies like the covid-19 pandemic.

Friday, July 24, 2020

Tacit knowledge learning and A.s.a. H.

It is argued that the majority of human knowledge is tacit knowledge. A.s.a. learns a substantial number of "subsymbolic patterns*," patterns that are never given names.**

* See my blog of 1 January 2016 for example and the end of my 23 August 2017 blog.
** This may be further enhanced when we give A.s.a. the very austere Toki Pona language.

Thursday, July 23, 2020

An argument against quantum immortality

Tegmark and others suggest that a conscious agent will be immortal in the Everett interpretation of quantum mechanics.* But the argument should work in the -t direction as well, so we ought to have existed since infinitely far in the past. This does not seem to be the case.

* M Tegmark, Fortsch. Phys., 46: 855-862, 1998.

One-shot learning with A.s.a. H., novelty, and attention

A.s.a. memorizes a new pattern if it is sufficiently different from patterns it already knows. Novelty is the key. The "attention getter." Further observations of similar patterns will fine tune the original memory, however.
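A minimal sketch of this novelty-driven memorization in Python. The similarity threshold and learning rate here are invented for illustration, and real A.s.a. cases are much richer than bare feature vectors:

```python
def similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

class CaseMemory:
    """One-shot learning: memorize novel patterns, fine-tune familiar ones."""
    def __init__(self, threshold=0.9, rate=0.1):
        self.cases = []             # stored prototype patterns
        self.threshold = threshold  # novelty boundary ("attention getter")
        self.rate = rate            # fine-tuning learning rate

    def observe(self, pattern):
        best, best_sim = None, -1.0
        for c in self.cases:
            s = similarity(pattern, c)
            if s > best_sim:
                best, best_sim = c, s
        if best is None or best_sim < self.threshold:
            # Sufficiently different: memorize in one shot.
            self.cases.append(list(pattern))
        else:
            # Familiar: nudge the stored prototype toward the new observation.
            for i, x in enumerate(pattern):
                best[i] += self.rate * (x - best[i])

mem = CaseMemory()
mem.observe([1.0, 0.0])   # novel -> memorized
mem.observe([0.0, 1.0])   # novel -> memorized
mem.observe([0.9, 0.1])   # similar to the first -> fine-tunes it
print(len(mem.cases))     # 2
```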

Thursday, July 16, 2020

New Mac

Over the last year my old iBook* finally died. I have bought a new MacBook and downloaded the latest versions of QB64 and the Xcode IDE. Testing things out I ran my A.s.a. H. code from my 10 February 2011 blog using QB64 and the code from my 14 May 2012 blog using the Xcode IDE. Both ran fine so I again have A.s.a. running on Windows, Linux, and MacOS/Unix.

* Shown in my blog of 7 January 2012.

Tuesday, July 14, 2020

Modularity and decomposing intelligence

I believe that intelligence has components like those listed in my 23 August 2010 blog. With A.s.a. H.’s architecture long term memory is frequently held in hard drives, flash drives, or even SD cards which can be fairly easily expanded allowing for knowledge growth. It is also possible to add additional (hardware and/or software) modules that perform extrapolation or other algorithms that increase A.s.a.’s creativity. Overall speed can be increased in some cases by use of parallel processing. Other sorts of intelligence can be more difficult to enhance.

Monday, July 13, 2020

Perhaps students should not know their grades too accurately

If the purpose of grades is to make students work harder* then perhaps they should not know too exactly what their grade is at any moment in time. I have seen students who had high grades near the end of a course slack off a bit, knowing that it wouldn’t be enough to change their final grade.

* See my blog of 12 October 2010.

Tradeoffs in kinds of intelligence*

Complex domains require that intelligent agents have more knowledge, but more memory (knowledge) slows down processing (search). Simple domains therefore favor faster, less knowledgeable agents, while complex domains favor more knowledgeable, slower thinking agents. Different environments favor different kinds of intelligence.*

* See my blog of 23 August 2010.
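The knowledge/speed tradeoff is easy to make concrete: with a flat case memory and a linear scan, matching cost grows in direct proportion to how much the agent knows. This is a hypothetical sketch, not A.s.a. code; indexing or specialization can reduce, but not eliminate, the effect:

```python
def nearest_case(cases, query):
    """Linear scan over memory: cost grows with the number of stored cases."""
    comparisons = 0
    best, best_d = None, float("inf")
    for c in cases:
        comparisons += 1
        d = sum((a - b) ** 2 for a, b in zip(c, query))
        if d < best_d:
            best, best_d = c, d
    return best, comparisons

small = [(i, i) for i in range(10)]       # a simple-domain agent's memory
big = [(i, i) for i in range(10_000)]     # a complex-domain agent's memory
print(nearest_case(small, (3, 3))[1])     # 10 comparisons
print(nearest_case(big, (3, 3))[1])       # 10000 comparisons
```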

Monday, July 6, 2020

Nouns, verbs, spatial and temporal patterns

Of the words/concepts A.s.a. H. has learned almost all nouns are purely spatial patterns.* One exception is “hardness” which reflects the amount of displacement registered over time as a force is applied. Verbs are more complicated. About half of all verbs are names for temporal patterns. Others like “need,” “listen,” “see,” “obstruct” are not.

* registered at a single time step

Return to school

Before we all return to campus everyone needs to be tested for Covid-19. Will we be?

Monday, June 22, 2020

AI adolescence?

I have argued that we humans spend too much of our lives in childhood and adolescence and too little of it as productive adults.* I had thought that AIs would not have this issue. But perhaps AIs also need a controlled adolescent phase in order to overcome the problem discussed in my blog of 6 June 2020.

* See, for example, my blog of 15 Oct. 2010.

Thursday, June 18, 2020


The concepts that A.s.a. H. acquires from its sensations and actions* support empiricism. The innate concepts that A.s.a. H. has do not.** If I restrict attention to the concepts defined in the Toki Pona language then about 1/6 of all the concepts are innate.

* See, for example, my blogs of 1 Oct 2015 and 5 Nov 2015.
** See, for example, my blogs of 23 Jan 2013, 19 Oct 2015, 21 Feb 2020, and 27 Feb 2020.

Wednesday, June 10, 2020


In utility theory (and elsewhere) transitivity is assumed. If A is preferred to B and B is preferred to C then A must be preferred to C: (A>B) AND (B>C) => (A>C). But I have shown situations* where some team A will always beat team B and team B will always beat team C but team C will always beat team A. I.e., in a game between A and B you should always bet on A, in a game between B and C you should always bet on B, but in a game between A and C you should bet on C.

* See my blog of 20 June 2019.
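A standard self-contained illustration of the same failure of transitivity is a set of non-transitive dice (an analogous case, not the team example of the blog referenced above). A short exhaustive check in Python:

```python
from itertools import product

# Three "teams" as non-transitive dice (a well-known example):
A = [2, 2, 4, 4, 9, 9]
B = [1, 1, 6, 6, 8, 8]
C = [3, 3, 5, 5, 7, 7]

def p_beats(x, y):
    """Probability that a roll of die x beats a roll of die y (exhaustive count)."""
    wins = sum(1 for a, b in product(x, y) if a > b)
    return wins / (len(x) * len(y))

print(p_beats(A, B))  # 0.5555555555555556 (= 5/9): bet on A against B
print(p_beats(B, C))  # 0.5555555555555556: bet on B against C
print(p_beats(C, A))  # 0.5555555555555556: bet on C against A -- transitivity fails
```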

Saturday, June 6, 2020

The problem with reading

When A.s.a. H. reads it is uncritical. It accepts lies, at least initially. Perhaps it will help if we delay reading until A.s.a. has a larger knowledgebase. Perhaps we cannot simply turn an AI loose reading from the internet and expect it to learn.

Thursday, May 28, 2020


AI programs are frequently criticized for being insufficiently general ("narrow" AI). But perhaps AIs should be more specialized than humans are. Humans are employed and work in environments that are quite different from the ones to which evolution adapted them. Humans are probably too generalized for the specialties they practice in the modern world.

Wednesday, May 20, 2020


Humans, animals, and robots will each have a different subjective experience of the SAME environmental stimuli. By “subjective” we then mean that any sensory input is filtered by: sensor nonlinearities and limitations, any preprocessors, the set of learned internal concepts available to be activated, any internal concepts that have previously received activation, etc. The internally “processed and interpreted” signal is the subjective qualia, which will be somewhat different in each creature/agent.

Tuesday, May 19, 2020

Resolving natural language ambiguities

Natural languages suffer from ambiguities.* Each word in Toki Pona has more than one meaning. If sensory inputs are present when A.s.a. hears/sees a word then these may offer sufficient context so as to resolve the ambiguity. When reading Toki Pona A.s.a. will have had various concepts activated by previously input words and sentences. This will also provide some context for subsequent word-sense disambiguation.

* See, for example, Artificial Intelligence: A Modern Approach, 4th edition, Russell and Norvig, Pearson, 2020, page 252.
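A toy sketch of this kind of context-driven disambiguation. The Toki Pona word "moku" really can mean both "food" and "eat"; the feature sets and active-concept sets below are invented for illustration, and A.s.a.'s actual concept activations are of course graded rather than binary:

```python
# Hypothetical sense inventories for the ambiguous word "moku":
senses = {
    "food": {"object", "edible", "taste"},
    "eat":  {"action", "mouth", "edible"},
}

def disambiguate(senses, active_concepts):
    """Pick the sense whose features overlap most with the currently active concepts."""
    return max(senses, key=lambda s: len(senses[s] & active_concepts))

# Context supplied by current sensory input or by previously read words:
print(disambiguate(senses, {"action", "mouth"}))   # eat
print(disambiguate(senses, {"object", "taste"}))   # food
```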

Friday, May 15, 2020

Reconfigurable robots

Reconfigurable* robots make it easier to prototype and adapt robots for new environments and experiments. About one third of my robots are fully reconfigurable.**

* not self-reconfigurable
** like Lego, Vex, Meccano, Ubtech Jimu, etc.

Saturday, May 9, 2020

A.s.a. H. Reading

A.s.a. H. has a basic understanding of the Toki Pona language. Stories like Where the Wild Things Are have been translated into Toki Pona.* A.s.a. can then read and learn them.** A.s.a. could be taught the 36 dramatic situations in this way.

* Any translation software is acting as a preprocessor, in effect performing dimensionality reduction.

** I’ve been using period or pause to signal the end of a temporal sequence. Is this the “correct”/only/“best” thing to do?

Wednesday, April 22, 2020

Ending lockdown too soon

If the republicans make themselves sick it serves them right. But they’re going to make the rest of us sick along with them.

Thursday, April 16, 2020


I’ve hacked a Blexy amphibious RC toy car to give A.s.a. a small swimming robot.

Saturday, April 11, 2020

The need for compromise

If 49% want one thing while 51% want something else it is not fair for the 51% to get 100% of what they want (simple majority voting). Rather, the 49% should get some of what they want too. Scientific pluralism offers one way of composing compromise actions. (See my blog of 17 August 2012.)

Sunday, March 22, 2020

Grading online courses

Grading is an issue for fully online courses. How do you know who really did the work? I guess you could give oral exams over Skype and demand photo ID. I hate that idea. The issues associated with oral exams are well known.

Friday, March 20, 2020

One approach to NLU with A.s.a. H.

We would like to have AIs that can read texts written in human languages and learn from them. We have given A.s.a. H. the concepts/vocabulary of the Toki Pona artificial language.* There is machine translation software that translates from English to Toki Pona and from Toki Pona to English. If this can be improved it might prove adequate for our purpose.

* See my blog of 1 Oct. 2015.

Thursday, March 19, 2020

A cost of specialization

Valuable concepts learned by one specialist agent can be (and are) passed on to the next generation of agents in that specialty. Such concepts may not be useful to agents of another specialty.* They may even be harmful.

* In terms of actual physical agents I now have 40-50 small robots like those in my blog of 8 Jan. 2018.

Error correction, forgetting, and big data

As the environment changes the concepts we use to describe it must also change. We need to forget some concepts entirely.* (Things like spirits, ghosts, slaves?) In the absence of forgetting concepts are modified by averaging over lots of additional experiences.** I.e., big data.

* See my blog of 28 Oct. 2018.
** See for example my original paper on A.s.a. H, Trans. Kan. Acad. Sci., vol. 109, No. 3/4, 2006.
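The difference shows up in the two standard update rules. An incremental mean (no forgetting) is dragged along by all of history, while a fixed learning rate discounts old experience geometrically. This is an illustrative sketch with invented data, not A.s.a.'s actual update:

```python
def running_average(values):
    """No forgetting: the prototype is the mean of ALL experiences (big data)."""
    m, n = 0.0, 0
    for v in values:
        n += 1
        m += (v - m) / n     # incremental mean update
    return m

def exponential_forgetting(values, rate=0.2):
    """Forgetting: a fixed learning rate weights recent experiences more."""
    m = values[0]
    for v in values[1:]:
        m += rate * (v - m)  # old experiences decay geometrically
    return m

# The environment shifts from readings near 0.0 to readings near 1.0:
data = [0.0] * 50 + [1.0] * 10
print(round(running_average(data), 3))         # 0.167 -- dragged down by history
print(round(exponential_forgetting(data), 3))  # 0.893 -- tracks the change
```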

Tuesday, March 17, 2020

Contemplating online labs

With countries on lockdown over the coronavirus pandemic universities are trying to go entirely online. In thinking about online labs I ask myself: Would you want to be operated on by a surgeon who had learned surgery online? Doesn't a science curriculum require something that remains truly "hands on"?

Friday, March 6, 2020

Avoiding big data

Humans are not expected to digest anything like the amount of data that is regularly presented to artificial neural networks. So, if A.s.a. H. is to be anywhere near as intelligent as humans are it should not need to see big data either.

Monday, March 2, 2020

Unintelligent mechanical life

It is generally believed that there was a time when there was life on earth but no intelligent life. We are currently looking for such an ecosystem on Mars or elsewhere in space. Could we have an ecosystem for mechanical life without any artificial intelligences? Could such a system then support a communist style human utopia? (See my blog of  20 January 2020.)

Thursday, February 27, 2020

Innate concepts for an AI

Stanislas Dehaene argues that humans are born with certain innate, genetically hardwired concepts and that to have human level intelligence an AI will also have to have these implanted in it.* There has been a lot of work on face recognition. I have not given A.s.a. H. such a module but certainly could do so. I have used pretrained neural networks as one sort of preprocessor for A.s.a. in order to identify things like letters and numbers. The Google AIY vision kit can recognize more than a thousand common objects. (A.s.a. has the equivalent of "place neurons" that detect GPS, beacons, etc.) The AIY voice kit can recognize many common vocal commands. There is much research going on with respect to natural language understanding.

A.s.a. is hierarchical. Low level regularities are learned more quickly than higher level ones. We have also played with adjusting the learning rates differently on different levels of the concept hierarchy.** When we have done some hand coding of concepts this is equivalent to giving A.s.a. innate concepts. We have sometimes given a layer in the hierarchy a two dimensional memory to allow it to create a spatial map or 2-D vision field. A.s.a. has been given an innate sense of time via time stepping and the time dilation algorithm.

A.s.a. records, updates, and employs probabilities; are they sufficient?

A.s.a.'s hierarchically organized concepts are immediately available for reuse in new combinations. I've emphasized the importance of output/actions, prediction, and extrapolation in addition to simply passively learning sensory input patterns.

A.s.a. may be more comparable to a society of humans rather than one single person.*** Agents can specialize, helping to deal with the combinatorial explosion.**** Various agents can compete against each other in each generation. A.s.a. really can multitask even if individual humans can not.

I have been continuously working on attention mechanisms. How should error correction be propagated between layers of the concept hierarchy? What should a good object concept include? Can consolidation of learning be equated with a society training a specialist agent or is more needed?

* See, for example, How We Learn, Viking, 2020. (Something of a counter argument is in my blog of 21 February 2020.) Dehaene may equate AI to deep learning neural networks and big data, the current fad. There is, of course, a lot more to AI than that.

** And a simulated annealing process.

*** Alternatively, an A.s.a. agent might be likened to one of the specialized regions in a human brain.

**** One sort of attention mechanism.

Monday, February 24, 2020

More evidence for value pluralism

The human brain makes use of multiple neurotransmitters: acetylcholine, dopamine, serotonin. While the dopamine circuit attempts to detect "good" and "bad" or "like" and "dislike" acetylcholine signals something more like "important" versus "unimportant." Vector values again.

Friday, February 21, 2020

Innate concepts

As a result of millions of years of evolutionary history the newborn human brain appears to have innate, genetically hardwired concepts of objects, numbers, probabilities, faces, language, etc.* These are a result of adaption to the specific environments that we and our animal ancestors encountered. They may not be ideal for environments we will face in the future. They may not tell us much about Kant's "thing in itself." I can give A.s.a. H. these same concepts, but should I?** I don't want my AI to BE human. The boundaries of human intelligence are partly an accident of evolutionary history. With A.s.a. I want to expand those boundaries not retain them.

* See, for example, Stanislas Dehaene, How We Learn, Viking, 2020.
** For example, number neurons that activate when they see 1 thing, or 2 things, or 3 things...

Sunday, February 16, 2020

Another very simple specialist agent

A.s.a. H. learns that collisions are to be avoided since they may cause damage. Since clutter is seen to promote collisions A.s.a. evolves a specialist to clear clutter. The algorithm for this agent is very similar to that for a toy sumo robot except that the A.s.a. agent knows to give up and move on if the obstacle proves to be immovable.

Wednesday, February 12, 2020

Alternative flyer

This flyer is a small quadcopter suspended from a balloon, with an instrumentation package suspended in turn below the drone. The assembly has slightly negative buoyancy. A tether can connect the instrumentation to a computer and trickle charge the drone’s battery at any time. The flyer maneuvers slowly, which is an advantage for A.s.a.

Thursday, February 6, 2020


I am hacking a DSstyles sky walker drone in order to give the A.s.a. H. society of agents a small flying robot. This particular drone is encaged which greatly simplifies repeated takeoffs and landings. As a result of having an anemometer and microphone nearby A.s.a. immediately associates "flying" with "wind" and "engine noise" in its concept hierarchy. A.s.a. had already associated larger vertical motions with an atmospheric pressure decrease.

Saturday, February 1, 2020

Evolving robot explorers

The A.s.a. H. society of agents learns to specialize. One of the mobile robotic arms we have available has been used to transport some of the larger sensors; things like Geiger tubes, metal detectors, anemometers, etc. A.s.a. H. learns/creates a specialist “explorer agent” making use of these hardware components and uses it to probe previously unmapped areas.* The program this particular agent learns is relatively simple, mostly data logging and GPS and/or beacon-signal logging.

* Seeking out things like abundant light for solar panels, moderate temperatures, low clutter environment, etc. in order to maximize utility.

Tuesday, January 28, 2020

A conscious machine

Working within Baars' global workspace theory, Barthelmess, Furbach, and Schon argue* that the Hyper reasoning system, with ConceptNet as its knowledge base, is conscious. While I agree with much of this I do believe there are different degrees of consciousness. I have also argued** that consciousness is a collection of processes, not one single thing. Hyper with ConceptNet does not have a notion of self,*** nor does it have all 10 of Hobson's "functional components".****

I don't think that consciousness is as difficult as the "hard problem" people would have us believe. On the other hand I don't think that Hyper-ConceptNet is as fully conscious as A.s.a. H. is.*****

The attention issue is part of dealing with the curse of dimensionality. It's a problem that must be faced by any machine trying to operate in a large state space.

* arXiv:2001.09442v1, 26 Jan. 2020
** See my blog of 19 Oct. 2016
*** See Trans. Kansas Academy of Sci., 2017, page 108
**** For example, it seems to lack orientation, emotion, and values.
***** But ConceptNet is a large knowledgebase of almost 3 million axioms in first order logic!

Monday, January 20, 2020

The Communist Utopia

The argument goes something like this:
- Society requires that most of us work.
- But physics tells us that work is energy. “Labor saving appliances” allow us to replace human labor with other energy sources.
- It might be possible to make energy free. Tesla thought that there might be sources of free cosmic energy. Much of his physics was unsound, but solar energy is a possible example. Lewis Strauss, chairman of the Atomic Energy Commission (1954), thought nuclear energy might become “too cheap to meter.” Plentiful thorium or deuterium fuels, for example.
- No one then need work any longer. Machines would replace all human labor. (Today machines are able to do half of all human jobs. But completing the task might involve the creation of  “mechanical life” and the subsequent class struggle between humans and AIs.)

Sunday, January 12, 2020

Vector values

The idea that humans have a vector value system* receives some support from Shalom H. Schwartz's "circular model of values." (See, for example, Journal of Research in Personality, June 2004, pp. 230-255.)

* A.s.a. H. frequently makes use of a vector value system (see my blog of 19 Feb. 2011) and my criticism of capitalism is based in part on the need to avoid a scalar utility (see my paper at www.robert-w-jones, philosopher, Capitalism is Wrong).

Friday, January 10, 2020

An example of learned attention, attending to

A.s.a. H. learns that (a robot's) collisions correlate with increased pain and damage. It also learns that sweeping the ultrasonic (obstacle) sensor back and forth correlates with fewer collisions as compared with a fixed, forward-directed ultrasonic sensor. A.s.a. H. then learns to sweep its sensor, looking for obstacles and spending more time attending to this particular input channel.

Alternatively, if the robot has a single fixed mounted sensor it may learn to make small repeated left and right turns as it advances forward.

Thursday, January 2, 2020

A kind of intentional thought

Whenever A.s.a. H. learns a case (sequence) this will include any actions that were taken. Actions need not be the activation of servo motors; they can include things like choosing to perform “thinking with a simulation” (see my 30 May 2019 blog) or adjusting things like the time spent extrapolating, doing feature extraction, etc. (e.g., adjusting parameters like L and skip, see my 10 Feb. 2011 blog). See also my book Twelve Papers, pages 15 and 16, on self monitoring.

Wednesday, January 1, 2020

Disembodied AI, a complication

Following up on my 1 May 2019 blog I have replaced A.s.a. H.’s lowest layer with human inputs. Unfortunately, some common human inputs need to go to A.s.a.’s second, third, and fourth layers. This complicates learning among other things.

A.s.a. H. learns behavior trees

Colledanchise and Ögren have discussed the advantages of behavior trees in their book Behavior Trees in Robotics and AI: An Introduction (arXiv 1709.00084v3, 15 Jan. 2018). Advantages are said to include modularity, hierarchical organization, reusability, and reactivity. A.s.a. H. learns behavior trees similar to that of figure 1.1 from Colledanchise and Ögren’s book:

Pick and place = (Grasp ball -> Carry ball -> Drop ball)
Grasp ball = (detect ball inside grippers -> close grippers -> sense force against grippers)
Carry ball = (sense force against grippers -> move)
Drop ball = (sense force against grippers -> open grippers -> sense no force against grippers)
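That tree can be rendered as code. This is only a toy sketch with stubbed-out sensing and actuation (the condition and action functions below are invented stand-ins), using sequence nodes that run their children in order and fail fast:

```python
def sequence(*children):
    """A behavior-tree sequence node: succeeds only if every child succeeds, in order."""
    def run(state):
        return all(child(state) for child in children)
    return run

# Stand-in conditions and actions operating on a simple state dictionary:
def ball_inside_grippers(state): return state["ball_near"]
def close_grippers(state):       state["grippers"] = "closed"; return True
def force_on_grippers(state):    return state["grippers"] == "closed" and state["ball_near"]
def move(state):                 state["at_goal"] = True; return True
def open_grippers(state):        state["grippers"] = "open"; state["ball_near"] = False; return True
def no_force_on_grippers(state): return state["grippers"] == "open"

grasp = sequence(ball_inside_grippers, close_grippers, force_on_grippers)
carry = sequence(force_on_grippers, move)
drop  = sequence(force_on_grippers, open_grippers, no_force_on_grippers)
pick_and_place = sequence(grasp, carry, drop)

state = {"ball_near": True, "grippers": "open", "at_goal": False}
print(pick_and_place(state))  # True
print(state["at_goal"])       # True
```

A real implementation would also return a "running" status for actions that take time; the sketch above keeps only success/failure.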

A.s.a. H.'s divided self

In some experiments I have employed a society of Asa agents. In others a small processor (a LEGO NXT, EV3, Arduino, or Raspberry Pi) rode on each mobile effector (or sensor array). These little brains were then linked (frequently by a power and communication tether) to a larger processor (brain), somewhat like in an octopus. The self concept that A.s.a. H. forms (see, for example, my blogs of 21 July 2016 and 1 January 2017) is then distributed among multiple brains in multiple locations.