Tuesday, October 30, 2018

We are all scientists

I ran across a quote from T. H. Huxley which I used in an introductory physical science course 30 years ago: "... there is a continuous gradation between the simplest rational act of an individual and the most refined scientific experiment." Science is simply a refinement of the way in which we all think. Doing science is simply being intelligent. See my blog of 1 Sept. 2012.

The pre-eminence of the present

The present seems to be defined by what we are attending* to, the contents of our short-term memory. The past seems to be defined by what is stored in our long-term memory and is largely fixed.** A.s.a. H. works similarly, but with a short-term memory at each level of its hierarchy. A sense of the flow of time is associated with the changing contents of the short-term and long-term memories. Long-term memory grows.

* hence its pre-eminence
** though forgetting does occur, changing our past. With A.s.a. H., new exemplars get averaged with old cases, modifying the past.
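A minimal sketch of that exemplar averaging (the function name and learning-rate parameter are my own illustration, not A.s.a. H.'s actual code):

```python
def update_case(stored_case, new_exemplar, rate=0.1):
    """Blend a new exemplar into a stored case.

    Each update nudges the remembered case toward the new
    experience, so the stored 'past' is gradually modified.
    """
    return [(1.0 - rate) * old + rate * new
            for old, new in zip(stored_case, new_exemplar)]

# A stored case and a new, slightly different exemplar:
case = [1.0, 0.0, 0.5]
case = update_case(case, [0.8, 0.2, 0.5])  # approximately [0.98, 0.02, 0.5]
```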

Sunday, October 28, 2018


I still remember my first telephone number, Warwick-8-3122, long since defunct. Since it hasn't been used in decades, was never used much, and wasn't tied to any important event, A.s.a. H. would have forgotten it easily.

Introductory robotics course

ESU is offering a robotics course next year, which got me thinking about what should be in such a course:

Uses for robots
    Image processing?
Computer interfacing

What could a laboratory component consist of without making it too idiosyncratic?

Sun seeking?
Obstacle avoidance?
Wall hugging?
Line following?
Pick and place?
Recharging station search?
Object search?
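As one concrete starting point, the obstacle-avoidance exercise could begin from a simple decision rule like the sketch below (the sensor readings and command names are my own illustration, not tied to any particular robot kit):

```python
def avoid_obstacle(left_cm, right_cm, threshold=20.0):
    """Choose a drive command from two distance readings (in cm).

    Go straight while both sides are clear; otherwise turn
    away from the nearer obstacle.
    """
    if left_cm > threshold and right_cm > threshold:
        return "forward"
    if left_cm < right_cm:
        return "turn_right"
    return "turn_left"

# Example: obstacle close on the left, so turn right.
command = avoid_obstacle(10.0, 80.0)
```

Students could then replace the hard-coded threshold with a tuned value for their own sensors, or extend the rule to wall hugging.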

Wednesday, October 24, 2018


As of Android 4.1 it became possible to run versions of the A.s.a. H. software on lightweight, inexpensive, low-power mobile devices. I wanted to be able to use such tablets as onboard processors for A.s.a. H. robots. (See my blog of 1 Jan. 2013 for an example.)

Reality, vectors, concepts

For A.s.a. H., concepts are vectors. For humans, too, many of our concepts should be seen as vectorial. (See my blogs of 20 Oct. 2010, 12 April 2016, 17 Sept. 2014, 1 Sept. 2016, 7 Jan. 2017, 4 Oct. 2016, 1 Jan. 2017, 26 Feb. 2012.) Reality, what is "real," is also a vector concept.
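A minimal illustration of treating concepts as vectors, so that two concepts can match by degree rather than all-or-nothing (the feature dimensions and values here are invented for the example):

```python
import math

def cosine(u, v):
    """Cosine similarity between two concept vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# Invented feature components: [has mass, persists in time, directly observable]
rock = [1.0, 1.0, 1.0]
rainbow = [0.0, 0.3, 1.0]

# A rainbow is partially 'real' along these components, not simply real/unreal.
similarity = cosine(rock, rainbow)
```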

Tuesday, October 23, 2018

Robotic simulations and attention

I always suggest that one should do as much with simulations as possible. Simulations* are quicker and cheaper than using real physical robots. You may even be able to take the program you developed on a simulator and use it directly to run a real physical robot.** Simulations oversimplify the issue/problem of attention, however.***

* See, for example, Blankenship and Mishal's Robot Programmer's Bonanza, McGraw Hill, 2008.
** See, for example, RobotBASIC Projects for the Lego NXT, CreateSpace, 2011.
***My blog of 10 March 2016 describes how to improve the situation somewhat.

Wednesday, October 17, 2018

AI embodiment?

With some AI architectures* one could imagine completely replacing the lowest layer or so with humans and the upper layers, the AI(s), would not need to be embodied. I have done some experiments of that sort.
* For example my A.s.a. H., Albus' RCS, Meystel's nested controller, deep ANNs, etc.

Tuesday, October 16, 2018

Machine Consciousness

I have been reading Feinberg and Mallatt's book Consciousness Demystified (MIT Press, 2018) and comparing their theory of neurobiological naturalism with A.s.a. H.'s operation. Feinberg and Mallatt emphasize the multiple realizability of consciousness.
A.s.a. H. is hierarchically organized. The lowest level in the A.s.a. H. hierarchy can provide rapidly responding reflex arcs. A.s.a. exhibits mental causation: it reacts to its environment in both simple and complex ways. 2D memory can store mental images. A.s.a. has exteroceptive sensations from cameras, microphones, odor sensors, etc. Interoceptive sensation comes from accelerometers, proprioception, pain and temperature sensors, battery charge level, etc. Experiences are recorded as cases and sequences of cases.

You "know what it is like to be" A.s.a. if you experience the same cases (patterns of experience) that A.s.a. experiences. In order to fully "know what it's like to be a" fish you would have to be able to sense electric fields. A.s.a. can do that even if humans cannot. Proprioception is performed by Lego motors and other smart servos when they sense/measure their own positioning and motion. Battery charge and pain sensors and thermistors distributed throughout the Lego robotic agents provide affect with somatotopic body mapping.

A.s.a. experiences qualia. When a human feels a full stomach or when A.s.a. senses a fully charged battery, these are qualia. To "feel" is to represent something with a signal. One set of signals following one pathway becomes associated with the label "red." A different signal on a different pathway becomes associated with the label "C-sharp." Signals and pathways are private/subjective. Different valences are recognized by the components of A.s.a.'s vector value system.
Interestingly, Feinberg and Mallatt estimate that all conscious animal brains have a minimum of about 100,000 neurons (page 80), and that "...complex neural hierarchies build mapped representations of different objects in the environment from multiple elaborate senses..." (page 97). This is what A.s.a. H. does as well.

Monday, October 15, 2018

Agent evolution, value divergence

While running an A.s.a. H. society of specialist agents, each with a vector value system,* I found that one group of agents evolved higher and higher V1 while all their other value components remained nearly unchanged. Another group of agents evolved higher and higher V2 while all their other value components remained nearly unchanged.

* Similar to my blog of 19 Feb. 2011. The vector value of each agent has components V1, V2, etc.
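This kind of divergence can be reproduced in a toy model (everything below, the agent representation, selection rule, and parameters, is my own illustration rather than the actual A.s.a. H. experiment): selecting each group on a single value component ratchets that component upward while the others merely drift.

```python
import random

def evolve(population, value_index, generations=50, sigma=0.05):
    """Toy hill-climbing evolution of agents' value vectors.

    Each agent is a list [V1, V2].  Each generation keeps the
    half of the population that scores best on one chosen value
    component, then refills the population with noisy copies of
    the survivors.
    """
    for _ in range(generations):
        ranked = sorted(population, key=lambda v: v[value_index], reverse=True)
        survivors = ranked[: len(population) // 2]
        population = [[x + random.gauss(0.0, sigma)
                       for x in random.choice(survivors)]
                      for _ in range(len(population))]
    return population

random.seed(0)
specialists = evolve([[0.5, 0.5] for _ in range(20)], value_index=0)
mean_v1 = sum(v[0] for v in specialists) / len(specialists)
```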

Saturday, October 13, 2018

Other people contemplate scientific pluralism

“Maybe it's not even possible to capture the universe in one easily defined, self-contained form...” “Perhaps the true picture is more like the maps in an atlas, each offering very different kinds of information, each spotty.” R. H. Dijkgraaf, Director, Institute for Advanced Study, Princeton

Tuesday, October 2, 2018

Language, drawing attention to

Words/labels face little competition for attention compared with the many features of objects present in, say, visual input. Words then provide (spreading) activation to the features/stimuli associated with the named categories. Language may help us with the problem of attention in complex real-world environments. Otherwise, A.s.a. may require many learning passes before it averages out stray stimuli.
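The mechanism can be sketched in a few lines (the association table and the boost value are invented for the example):

```python
def spread_activation(associations, word, boost=0.5):
    """One step of spreading activation from a heard label.

    `associations` maps a word to the features of its named
    category; each linked feature receives an activation boost,
    helping those stimuli win the competition for attention.
    """
    activation = {word: 1.0}
    for feature in associations.get(word, []):
        activation[feature] = activation.get(feature, 0.0) + boost
    return activation

# Hearing "apple" primes the features linked with that category.
links = {"apple": ["red", "round", "sweet"]}
act = spread_activation(links, "apple")
```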

Monday, October 1, 2018


In The Democracy of Objects (Open Humanities Press, 2011, page 246) Levi Bryant argues that existence is binary: no object is more real than any other. In his object-oriented ontology (OOO) all entities stand on equal ontological footing in a flat ontology. With A.s.a. H. I have argued for (the usefulness of) a hierarchically structured ontology. I have also argued that not all entities are equally real.

My experience with A.s.a. H. suggests that we should not extrapolate our (mental) concepts too far. I believe that the excessive extrapolation of concepts is the source of some of philosophy's problems.