Thursday, February 27, 2020

Innate concepts for an AI

Stanislas Dehaene argues that humans are born with certain innate, genetically hardwired concepts and that, to have human-level intelligence, an AI will also have to have these implanted in it.* There has been a lot of work on face recognition. I have not given A.s.a. H. such a module but certainly could do so. I have used pretrained neural networks as one sort of preprocessor for A.s.a. in order to identify things like letters and numbers. The Google AIY vision kit can recognize more than a thousand common objects. (A.s.a. has the equivalent of "place neurons" that detect GPS, beacons, etc.) The AIY voice kit can recognize many common vocal commands. There is much research going on with respect to natural language understanding.
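A minimal sketch of this preprocessing arrangement, assuming a hypothetical pretrained_classifier as a stand-in for whatever recognizer is actually used (a character-recognition network, the AIY vision kit's detector, etc.); the recognized label simply becomes one more component of the case handed to A.s.a. H.:

```python
# Sketch only: `pretrained_classifier` stands in for any pretrained recognizer
# (character recognition, the AIY vision kit's object detector, etc.).
# The label it returns is appended to the raw sensor vector before the
# combined case is handed to A.s.a. H.'s lowest concept layer.

def pretrained_classifier(image):
    """Hypothetical stand-in: return a symbolic label for an image."""
    return "letter_A"              # e.g. a recognized character or object name

def build_input_case(raw_sensors, image):
    label = pretrained_classifier(image)
    return raw_sensors + [label]   # symbolic label joins the sensory case

if __name__ == "__main__":
    case = build_input_case([0.3, 0.7, 0.1], image=None)
    print(case)                    # -> [0.3, 0.7, 0.1, 'letter_A']
```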

A.s.a. is hierarchical. Low-level regularities are learned more quickly than higher-level ones. We have also played with adjusting the learning rates differently on different levels of the concept hierarchy.** When we have hand coded some concepts this is equivalent to giving A.s.a. innate concepts. We have sometimes given a layer in the hierarchy a two-dimensional memory to allow it to create a spatial map or 2-D vision field. A.s.a. has been given an innate sense of time via time stepping and the time dilation algorithm.
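A minimal sketch of level-dependent learning rates, with assumed rate values; lower levels are given larger rates so that low-level regularities are acquired faster:

```python
# Sketch only: per-level learning rates in a concept hierarchy.
# Level 0 (closest to the sensors) is given the largest rate so that
# low-level regularities are acquired faster than higher-level ones.

LEARNING_RATES = [0.5, 0.2, 0.05]          # one rate per level (assumed values)

def update_concept(stored, observed, level):
    """Move a stored concept vector toward a new observation."""
    r = LEARNING_RATES[level]
    return [s + r * (o - s) for s, o in zip(stored, observed)]

if __name__ == "__main__":
    print(update_concept([0.0, 0.0], [1.0, 1.0], level=0))  # fast update
    print(update_concept([0.0, 0.0], [1.0, 1.0], level=2))  # slow update
```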

A.s.a. records, updates, and employs probabilities; are these sufficient?
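One simple way such a probability might be recorded and updated, sketched here as a running frequency (an assumption, not necessarily A.s.a. H.'s actual bookkeeping):

```python
# Sketch only: a concept's probability kept as a running frequency over the
# cases in which it could have occurred.

class ConceptStats:
    def __init__(self):
        self.occurrences = 0
        self.opportunities = 0

    def update(self, occurred: bool):
        self.opportunities += 1
        self.occurrences += int(occurred)

    @property
    def probability(self):
        return self.occurrences / self.opportunities if self.opportunities else 0.0

if __name__ == "__main__":
    stats = ConceptStats()
    for seen in [True, False, True, True]:
        stats.update(seen)
    print(stats.probability)       # 0.75
```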

A.s.a.'s hierarchically organized concepts are immediately available for reuse in new combinations. I've emphasized the importance of output/actions, prediction, and extrapolation in addition to the passive learning of sensory input patterns.

A.s.a. may be more comparable to a society of humans rather than one single person.*** Agents can specialize, helping to deal with the combinatorial explosion.**** Various agents can compete against each other in each generation. A.s.a. really can multitask even if individual humans cannot.
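A minimal sketch of per-generation competition among specialists; the agent names, specialties, and scalar fitness used here are illustrative assumptions:

```python
# Sketch only: a society of agents in which specialists compete within each
# generation and the better performer in each specialty is retained.

import random

def run_generation(agents):
    """Keep the highest-scoring agent in each specialty."""
    best = {}
    for name, specialty, score in agents:
        if specialty not in best or score > best[specialty][2]:
            best[specialty] = (name, specialty, score)
    return list(best.values())

if __name__ == "__main__":
    society = [("A1", "vision", random.random()),
               ("A2", "vision", random.random()),
               ("B1", "navigation", random.random()),
               ("B2", "navigation", random.random())]
    print(run_generation(society))
```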

I have been continuously working on attention mechanisms. How should error correction be propagated between layers of the concept hierarchy? What should a good object concept include? Can consolidation of learning be equated with a society training a specialist agent or is more needed?

* See, for example, How We Learn, Viking, 2020. (Something of a counter argument is in my blog of 21 February 2020.) Dehaene may equate AI to deep learning neural networks and big data, the current fad. There is, of course, a lot more to AI than that.

** And a simulated annealing process.

*** Alternatively, an A.s.a. agent might be likened to one of the specialized regions in a human brain.

**** One sort of attention mechanism.

Monday, February 24, 2020

More evidence for value pluralism

The human brain makes use of multiple neurotransmitters: acetylcholine, dopamine, and serotonin. While the dopamine circuit attempts to detect "good" and "bad" or "like" and "dislike," acetylcholine signals something more like "important" versus "unimportant." Vector values again.
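A minimal sketch of a vector value, with component names chosen only to mirror the two signals above; the thresholds are assumptions:

```python
# Sketch only: value as a vector rather than a scalar, loosely analogous to
# separate neuromodulatory signals. Component names are illustrative.

value = {"goodness": +0.6,      # dopamine-like "good"/"bad" signal
         "importance": 0.9}     # acetylcholine-like "important"/"unimportant" signal

def should_attend(v, importance_threshold=0.5):
    """Attend to important events whether they are good or bad."""
    return v["importance"] > importance_threshold

def should_repeat(v):
    """Prefer to repeat actions judged good."""
    return v["goodness"] > 0.0

if __name__ == "__main__":
    print(should_attend(value), should_repeat(value))
```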

Friday, February 21, 2020

Innate concepts

As a result of millions of years of evolutionary history, the newborn human brain appears to have innate, genetically hardwired concepts of objects, numbers, probabilities, faces, language, etc.* These are a result of adaptation to the specific environments that we and our animal ancestors encountered. They may not be ideal for the environments we will face in the future. They may not tell us much about Kant's "thing in itself." I can give A.s.a. H. these same concepts, but should I?** I don't want my AI to BE human. The boundaries of human intelligence are partly an accident of evolutionary history. With A.s.a. I want to expand those boundaries, not retain them.

* See, for example, Stanislas Dehaene, How We Learn, Viking, 2020.
** For example, number neurons that activate when they see 1 thing, or 2 things, or 3 things...

Sunday, February 16, 2020

Another very simple specialist agent

A.s.a. H. learns that collisions are to be avoided since they may cause damage. Since clutter is seen to promote collisions, A.s.a. evolves a specialist to clear clutter. The algorithm for this agent is very similar to that for a toy sumo robot, except that the A.s.a. agent knows to give up and move on if the obstacle proves to be immovable.
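A minimal sketch of such an agent's control loop, assuming hypothetical motor and sensor primitives (push_forward, obstacle_moved, turn_away) and an assumed give-up limit:

```python
# Sketch only: the clutter-clearing specialist's control loop.

MAX_PUSHES = 5                       # give up after this many failed pushes (assumed)

def push_forward():
    """Hypothetical motor primitive: drive into the obstacle."""
    pass

def obstacle_moved():
    """Hypothetical sensor check: did the obstacle yield?"""
    return False

def turn_away():
    """Hypothetical motor primitive: abandon this obstacle and move on."""
    pass

def clear_obstacle():
    for _ in range(MAX_PUSHES):
        push_forward()
        if obstacle_moved():
            return True              # clutter successfully displaced
    turn_away()                      # immovable: give up and move on
    return False
```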

Wednesday, February 12, 2020

Alternative flyer

A small quadcopter is suspended from a balloon, with an instrumentation package suspended in turn below the drone. The assembly has slightly negative buoyancy. A tether can connect the instrumentation to a computer and trickle charge the drone’s battery at any time. This flyer maneuvers slowly, which is an advantage for A.s.a.

Thursday, February 6, 2020

Flight

I am hacking a DSstyles sky walker drone in order to give the A.s.a. H. society of agents a small flying robot. This particular drone is encaged, which greatly simplifies repeated takeoffs and landings. As a result of having an anemometer and microphone nearby, A.s.a. immediately associates "flying" with "wind" and "engine noise" in its concept hierarchy. A.s.a. had already associated larger vertical motions with an atmospheric pressure decrease.
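A minimal sketch of how such associations could be formed by counting feature co-occurrence within a time step; the feature names are taken from the example above, and the counting scheme itself is an assumption:

```python
# Sketch only: associating co-occurring features ("flying", "wind",
# "engine noise") by counting how often they appear in the same case.

from collections import Counter
from itertools import combinations

cooccurrence = Counter()

def observe(case_features):
    """Record every pair of features seen together in one time step."""
    for pair in combinations(sorted(case_features), 2):
        cooccurrence[pair] += 1

if __name__ == "__main__":
    for _ in range(10):
        observe({"flying", "wind", "engine noise"})
    observe({"vertical motion", "pressure decrease"})
    print(cooccurrence.most_common(3))
```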

Saturday, February 1, 2020

Evolving robot explorers

The A.s.a. H. society of agents learns to specialize. One of the mobile robotic arms we have available has been used to transport some of the larger sensors: things like Geiger tubes, metal detectors, anemometers, etc. A.s.a. H. learns/creates a specialist “explorer agent” making use of these hardware components and uses it to probe previously unmapped areas.* The program this particular agent learns is relatively simple, mostly data logging and GPS and/or beacon signal logging.

* Seeking out things like abundant light for solar panels, moderate temperatures, low clutter environment, etc. in order to maximize utility.
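A minimal sketch of the explorer agent's logging loop and a site-utility score along the lines of the footnote above; the sensor accessor, field names, and weights are illustrative assumptions:

```python
# Sketch only: the explorer agent's data-logging loop and a site-utility score
# favoring abundant light, moderate temperatures, and low clutter.

import time

def read_sensors():
    """Hypothetical accessor: return one record of the onboard sensors."""
    return {"gps": (39.0, -96.8), "light": 0.8, "temperature": 22.0,
            "clutter": 0.1, "radiation": 0.02}

def site_utility(r):
    """Score a site: more light, temperature near 20 C, less clutter."""
    return r["light"] - abs(r["temperature"] - 20.0) / 20.0 - r["clutter"]

def explore(steps=3, log=None):
    log = [] if log is None else log
    for _ in range(steps):
        record = read_sensors()
        record["utility"] = site_utility(record)
        log.append(record)           # simple data logging, GPS included
        time.sleep(0.0)              # placeholder for the robot's time step
    return log

if __name__ == "__main__":
    for rec in explore():
        print(rec)
```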