Monday, June 17, 2019

What is life?

Life isn’t a thing. Life is a process, a complex network of linked, cooperating, self-sustaining, and self-regulating feedback loops.* The exact details depend upon (and can vary with) the environment (the hardware) the network operates in.

* See Peter Hoffmann, Life’s Ratchet, Basic Books, 2012, especially pages 229-231.

Thursday, June 13, 2019

Another example of vector values

Team A may regularly beat team B, team B may regularly beat team C, and team C may regularly beat team A. This makes no sense if one tries to rank the teams with a single scalar value from “best” to “worst.” It does happen, however, because of the ways in which teams “match up.” Suppose (American football) teams are described by four component quantities: passing ability, running ability, pass defense, and run defense. Perhaps team A has a very good pass defense and can run, team B has a very good passing game and can defend the pass, and team C can run and can defend the run.
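
Such a cycle is easy to reproduce with toy numbers. A minimal sketch (the component values and the scoring rule below are my own inventions for illustration, not real team data):

```python
# Each team is a vector (pass, run, pass_defense, run_defense).
TEAMS = {
    "A": (2, 7, 9, 2),    # strong pass defense, good running game
    "B": (10, 2, 7, 2),   # strong passing game, good pass defense
    "C": (2, 9, 2, 7),    # good running game, good run defense
}

def score(offense, defense):
    """Toy rule: each offensive component only pays off
    where it exceeds the matching defensive component."""
    p, r, _, _ = offense
    _, _, pd, rd = defense
    return max(0, p - pd) + max(0, r - rd)

def winner(x, y):
    sx = score(TEAMS[x], TEAMS[y])
    sy = score(TEAMS[y], TEAMS[x])
    return x if sx > sy else y

# A beats B, B beats C, yet C beats A.
print(winner("A", "B"), winner("B", "C"), winner("C", "A"))
```

No single scalar ranking of the three teams is consistent with these outcomes; only the component vectors explain them.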

Tuesday, June 4, 2019

The problem of evil

Evil (danger) may be necessary, in the way suggested in my blog of 10 March 2017. With too little danger evolution might not produce brains and minds; minds are an adaptation to a particular range of environmental conditions. In animals and in A.s.a. H. pain is an important feedback signal. See also my blog of 24 April 2019.

Thursday, May 30, 2019

Society of intelligent agents thinking with simulations

Intelligent agents (both biological and artificial/mechanical) are given tasks to perform in various different environments. Knowledge learned in one environment is then available for use in others. With any given A.s.a. H. agent some of these may be real world environments being experienced by its robots while others may be simulations* experienced by simulated robots. The artificial world of the simulation can, in turn, have been learned** by some other*** A.s.a. H. agent(s) and its robots as it acts in the real world.

* like those provided by the RobotBASIC simulator for example
** like a map, perhaps (also, see my blogs of 7 Jan. 2015 and 7 May 2017)
***or possibly the same agent

Monday, May 27, 2019

Task failure, cognitive success

Most of the robots available to A.s.a. H. are quite clumsy and don’t always succeed at the tasks we set out for them*, but the relevant symbol grounding and concept formation is accomplished.

* Finding a charging station in a cluttered environment, docking, and recharging batteries for example.

Friday, May 24, 2019

The emergence of logical thinking in A.s.a. H.

C. Ivan and B. Indurkhya argue* that logical thinking emerges in three stages. The first stage finds patterns of association in observations. The second learns examples of what occurs if I perform action X and of how I can make Y occur. The third compiles examples of X being caused by Y and considers what would likely have happened had I done X in a particular situation. A.s.a. H. performs all of these cognitive operations.

* arXiv:1905.09730v1, 23 May 2019
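
A minimal sketch of the three stages on an invented action/outcome log (the log and the frequency-based counterfactual below are my own illustrative assumptions, not the authors' algorithm):

```python
from collections import Counter

# Invented experience log: (action_taken, outcome_observed) pairs.
log = [("push", "moved"), ("push", "moved"), ("push", "stuck"),
       ("wait", "stuck"), ("wait", "stuck"), ("push", "moved")]

# Stage 1: patterns of association -- which pairs co-occur most?
assoc = Counter(log)
print(assoc.most_common(1))

# Stage 2: what occurs if I perform action X? (empirical frequencies)
def outcome_freq(action, outcome):
    tried = [o for a, o in log if a == action]
    return tried.count(outcome) / len(tried)

# Stage 3: counterfactual -- what would likely have happened had I done X?
def counterfactual(action):
    return max({o for _, o in log},
               key=lambda o: outcome_freq(action, o))

print(outcome_freq("push", "moved"))   # 0.75
print(counterfactual("wait"))          # 'stuck'
```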

More force and weight sensors

The force sensing whiskers I described in my blog of 16 November 2016 can also be used as fingers on grippers or as feet on walking robots.

Sunday, May 19, 2019

The nature of properties

Paul Busch developed the idea* of an “unsharp reality” whose objects had “unsharp properties.”
Dennis Dieks concluded that properties are perspectival, relational, hyperplane dependent, and are neither monadic nor locally defined.** I disagree that these are “...a move away from classical intuitions.”** On a driver’s license, for example, a person is identified by their height, weight, and eye color. We understand that all of these properties may change for that individual, a very classical notion. And if the dependence of length and mass on motion with respect to the observer required relativistic mechanics, the dependence of color on observer motion was already present in the classical Doppler effect.
From Niels Bohr we have that physical quantities (properties) are defined by the experimental arrangement/conditions that produce (respond to) them. I.e., time is what is measured by a clock, magnetic field is what a compass responds to, etc. Asa’s sensors and procedures are what define the properties it knows about.
“Objects” or Lockean empirical substances are, in turn, collections of such properties.

* Recent Developments in Quantum Logic, Mittelstaedt and Stachow, eds., 1985, pp. 81-101.
** Quantum Reality, Perspectivalism and Covariance, May 2019, arXiv:1905.05097.
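
Bohr's operational view can be sketched in code: a property is just whatever a measuring procedure returns, and an "object" is a bundle of such properties. (All the procedure names and values below are invented stand-ins for real sensors.)

```python
# Stand-in measuring procedures; a real agent would read hardware here.
def clock():   return 12.0   # time is what a clock measures
def compass(): return 0.3    # magnetic field is what a compass responds to
def scale():   return 70.0   # weight is what a scale reports

SENSORS = {"time": clock, "field": compass, "weight": scale}

def observe(sensor_names):
    """An 'object' (a Lockean empirical substance) is the collection
    of properties its defining procedures currently report."""
    return {name: SENSORS[name]() for name in sensor_names}

person = observe(["weight", "time"])
print(person)   # {'weight': 70.0, 'time': 12.0}
```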

Wednesday, May 8, 2019

Software libraries

In a perfect world I might keep paper and electronic copies of all of my application programs. In practice I keep a paper copy of each "typical" program, all organized into categories by language:
AI PROLOG code library
AI LISP code library
AI C++ code library
AI BASIC code library
AI misc. code library (includes PYTHON, SCRATCH, EXCEL, etc.)
In each of these categories code is then organized into subcategories like:
Neural networks
Logic programming
Statistical algorithms, etc.
I wish I could do the same thing for electronic copies of these "typical" examples and for "all of the rest" as well. Various issues prevent this. For one thing, storage media change quickly: 8 inch and then 3.5 inch floppy disks, some formatted for Macs and some for PCs; optical drives and various hard disk drives; USB and other memory sticks and cards; backups for each. Electronic copies don't have the same half-life that paper copies do. Electronic libraries are more chaotic.

Tuesday, May 7, 2019


For some people the word argument bears a negative connotation. For me there is only argument, argument in favor of this position or argument in favor of that position. There is an excellent essay, The Argumentative Jew, by Leon Wieseltier, in the Jewish Review of Books, Winter 2015. Wieseltier says: "...disagreement is not only real, it is also ideal..." "It is the aspiration of a mentality that is genuinely rigorous and genuinely pluralistic." "...we are not only permitted to make a quarrel, we are obliged to make a quarrel." "We are to learn to live with disagreement..." "pluralism" "...commitment to many-mindedness..." "It is never too late for a rational objection or a logical advance." This is the intellectual tradition that I grew up in, this is the tradition that I embrace.

Friday, May 3, 2019


The philosopher Richard Rorty has said that "To say that a ...given machine has a mind is just to say that it will pay to think of it as having beliefs and desires."* A.s.a. H. believes, for example, that collisions will result in its experiencing forces and accelerations and damage. A.s.a. H. desires, for example, good health: high battery charge and little pain. By Rorty's definition A.s.a. H. is a mind.

*Contingency, Irony, and Solidarity, Cambridge Univ. Press, 1989, chapter 1.

Wednesday, May 1, 2019

Disembodied AI

If one could replace the bottommost layers of the A.s.a. H. hierarchical memory* with humans, the upper layers might then function as a disembodied AI.** This might fail if there are too many subsymbolic*** concepts. I am experimenting with a version of A.s.a. H. in which the lowest layer's inputs are named concepts learned previously by A.s.a. H.****

* See, for example, Trans. Kan. Acad. Sci., vol. 109, # 3/4, 2006, page 160, figure 1 or my book Twelve Papers, chapter 1, figure 1.
** See my blogs of 4 May 2013, 27 May 2014, and 17 Oct. 2018.
*** unnamed
****See, for example, my blogs of  28 July 2014 and 1 Oct. and 5 Nov., 2015.

Friday, April 26, 2019


Switching over to newer Windows PCs is no big deal. (I do end up having a big pile of old floppy and optical disks lying around.) But switching from BASIC Stamp and Cromemcos to Lego pbricks then to Arduinos and to Raspberry Pis is another matter. Even just the upgrade from Lego NXT to Lego EV3 impacts the software that can be used.*

* RobotBASIC for example, RobotBASIC Projects for the Lego NXT, Blankenship and Mishal, 2011.

Thursday, April 25, 2019

Pain components

Breakage sensors, excessively high or low temperatures, low battery charge,* and high acceleration constitute the standard A.s.a. H. pain components. Other possible pain signals might be: excessive force seen by any of the robots' force sensors, excessive electric currents,** excessively bright light, excessively loud sounds, and excessive acidity seen by a pH sensor. Excessive dust or smoke might also be considered.
One might forgo the current breakage sensors, and the resulting snap-together construction*** requirement, if one has enough alternative pain sensors.****

* i.e., hunger pains
** motor stalls for example
*** i.e., Lego, Vex IQ, Velcro, etc.
**** For a given robot preliminary experiments can seek to identify what levels of force and acceleration produce breakage for example.
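
A minimal sketch of assembling such a pain vector (all thresholds and readings below are invented placeholders, not A.s.a. H.'s actual values):

```python
# Safe bands per sensor: pain fires when a reading leaves its band.
THRESHOLDS = {
    "temperature_C": (0.0, 60.0),    # too cold / too hot
    "battery_frac":  (0.2, 1.1),     # low charge = "hunger pain"
    "accel_g":       (-1e9, 4.0),    # only an upper limit matters
    "current_A":     (-1e9, 2.0),    # e.g., a motor stall
}

def pain_vector(readings):
    """Return 1.0 for each component outside its safe band, else 0.0."""
    return {k: float(not (lo <= readings[k] <= hi))
            for k, (lo, hi) in THRESHOLDS.items()}

readings = {"temperature_C": 72.0, "battery_frac": 0.15,
            "accel_g": 1.0, "current_A": 0.5}
print(pain_vector(readings))
# temperature and battery components signal pain; the others are quiet
```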

Wednesday, April 24, 2019


In humans and in A.s.a. H. pain is an important feedback signal, part of our value system. Forgetting is important for learning and death is forgetting taking place at a high level in the memory hierarchy.* Nick Lane argues that death is one of evolution's 10 greatest inventions.**
I've considered the question of immortality in various blogs.*** I had thought that we humans spent too much time in training and too little time working in our "prime." (15 Oct. 2010 blog) But perhaps if we come to live longer we will simply find ourselves attacking even tougher problems and will then have to spend proportionately more time in our "training" phase.

*e.g.,  "Science advances one funeral at a time."
** Life Ascending, Norton, 2009
***15 October 2010, 31 March 2018, 2 and 5 April 2018, and 16 March 2019 for example.

Wednesday, April 17, 2019

Must we abandon classical logic or a single reality?

Frauchiger and Renner argue that "we are forced to give up the view that there is one single reality."*
Now Fortin and Lombardi argue that quantum propositions have a non-Boolean structure and that Frauchiger and Renner were wrong to make use of classical logic in their proof. Specifically, they argue that one cannot apply the classical inference rule of transitivity of a conditional when dealing with quantum propositions.** I, of course, have considered abandoning both purely classical logic and a single reality. See my blogs of 2 and 4 November 2018, 1 December 2018, and 1 January 2019.

* "Single-world interpretations of quantum theory cannot be self-consistent," 2016
** "Wigner and his many friends: A new no-go result?", 2019

Monday, April 15, 2019

More robotics sets

The "Robotics U" kits from Abilix are another robotics set that is compatible with A.s.a. H.'s pain system and that can be upgraded using processors like Arduinos, Raspberry Pis,* and the like. (With Arduino and Raspberry Pi there are numerous third party sources for sensors, actuators, software, hardware, etc.)

* The light version of A.s.a. H. in my 14 May 2012 blog will compile and run as-is under gcc and Raspbian. The Arduino IDE will also compile the A.s.a. H. code but you must, of course, edit the I/O commands. (And, typically, adjust them for each different robot you deploy.)

Saturday, April 6, 2019


Aristotle, the ancient Greeks, and later Christian thinkers felt we should value the virtues: temperance, generosity, magnificence, high-mindedness, controlled anger, friendliness, modesty, humility, chastity, obedience, faith, love, frugality, industry, cleanliness, tranquility, civility, courage, compassion, courteousness, dependability, fairness, honesty, justice, loyalty, and moderation. They felt we should avoid the vices: envy, lust, cruelty, gluttony, anger, covetousness, sloth, greed, selfishness, impulsiveness, insensitivity, and recklessness.* There is not a lot of overlap between these values and typical values that I’ve given A.s.a. H. (See, for example, my blog of 21 September 2010.)

* Nils Ch. Rauhut, Ultimate Questions, 2nd ed., Penguin Academics, 2007, page 250

Thursday, April 4, 2019

Why do philosophy?

Some scientists try to argue that philosophy is a useless waste of time*. It's not that I decide to do philosophy, rather I am led into doing it even when I am trying to do something as practical as engineering.

Any learning system must be able to measure its performance and decide what to change and what not to change ("learn"). It must have a value system. If there are multiple things it must assess then it may need to consider a vector value system. One is forced to do/study axiology.
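
With a vector value system some outcomes are simply incomparable: neither dominates the other. A minimal sketch using Pareto dominance (the component names and numbers are invented):

```python
def dominates(u, v):
    """u Pareto-dominates v if u is at least as good in every
    component and strictly better in at least one."""
    return (all(a >= b for a, b in zip(u, v)) and
            any(a > b for a, b in zip(u, v)))

# Value vectors: (battery_charge, task_progress, freedom_from_pain)
u = (0.9, 0.4, 0.8)
v = (0.5, 0.3, 0.6)
w = (0.2, 0.9, 0.7)

print(dominates(u, v))                    # True: u better across the board
print(dominates(u, w), dominates(w, u))   # False False: incomparable
```

The u-versus-w case is exactly where a scalar value system would force an arbitrary ranking and a vector value system does not.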

More advanced learning systems may need to monitor the time spent doing various things like searching memory, comparing quantities, feature extraction, deduction, interpolation, extrapolation, etc. Such systems will be "conscious" of what they do, the times spent on various actions, and any improvements which result.

Even simple Lego servos have built-in rotation sensors for feedback control. They are self-aware in this simple way. Similarly, a robot may need to detect and measure things like wheel slip and damage ("pain").

And when I'm teaching, the students will naturally ask me what the wave function is, and if quantum computers are able to do vastly more processing than classical computers then where is that processing happening? I have to think about how to best answer such questions.

*For example, Neil deGrasse Tyson, see Scientia Salon, 12 May 2014.
Or Steven Hawking, in The Grand Design, Bantam, 2010.

Wednesday, April 3, 2019

More mobile robots

Matt Timmons-Brown has designed a robot using a Raspberry Pi, Lego, and Velcro which is compatible with A.s.a. H. and its pain system. (Learn Robotics with Raspberry Pi, No Starch Press, 2019) As it stands it has no gripper but it does have an interesting vision system that allows it to identify, push, and chase a yellow ball around.

Thursday, March 28, 2019

Another popularity contest?

Ragone et al. report* "a low correlation between peer review outcome and impact in time of the accepted contributions". Are measurements of such things accurate?

* On Peer Review in Computer Science, Scientometrics, vol. 97, issue 2, pp 317-356, 2013


Mary Anne Warren argued* that being a person involves:
1. Being conscious of objects, events external and/or internal, and pain.
2. Being able to reason to solve problems.
3. Exhibiting self-motivated activity.
4. Being able to communicate on indefinitely many possible topics.
5. Possessing a self concept, self awareness.
A.s.a. H. does all of these things to at least some degree.

* On the Moral and Legal Status of Abortion, The Monist, 1973.

Monday, March 25, 2019

Price of progress

My A.s.a. project (autonomous software agent) is 24 years old now and A.s.a. H. is nearly 16 years old. Over that period of time some upgrade of hardware and software was essential: newer and faster computers, more memory, more sensors, better and more realistic simulations, a larger casebase, an improved pain system, etc. Other changes were more incidental: migrating between Mac OS, Windows, and Linux; adding LEGO EV3s, Arduinos, and Raspberry Pis; QB64, RobotBASIC, C++. Each modification takes time and introduces bugs.* Because of such issues I’ve resisted migrating A.s.a. H. to Python.

* Including such things as the Raspberry Pi assuming a UK keyboard so that you have to use the \ symbol to get #!

Thursday, March 21, 2019

Alternate realities: a quantum mechanical version

I have published various arguments in favor of the existence of alternate realities. (Trans. Kansas Acad. Sci., vol. 121, 2018, pg 211 for example.) Proietti et al. now claim to have performed a Wigner's friend experiment in which "two observers can experience fundamentally different realities." (arXiv:1902.05080v1, 13 Feb. 2019)

Monday, March 18, 2019

Robot simulator

I’m trying to make my simulations more realistic. Currently I find force and acceleration sensors to present the most problems.*

* The simulator that comes with RobotBASIC has a number of quite reasonable simulated sensors: rFeel, rBumper, rRange, rLook, rBeacon, rCompass, rChargeLevel, rGpsX, rGpsY, rGround

Saturday, March 16, 2019

Personal Identity

The body theories, memory theories, and soul theories all attempt to tie our personal identity to an enduring entity, be it physical or nonphysical. Perhaps that entity is a complex causal sequence/network. A.s.a. H.'s self would be the evolving patterns in its hierarchical memory like those listed in my 21 July 2016 blog. The boundary between myself and the world would be fuzzy. Are my glasses and hearing aids part of me? What about my calculator, smart phone, and computers?
If this definition of self is accurate then uploading yourself into a computer might be possible.
Perhaps a better theory of the self would be a combination of a body theory, a memory theory, and a causal network theory.

Thursday, March 7, 2019

Concept cleaning

I am trying to clean up some of A.s.a. H.'s concepts by manually editing its case base. This isn't easy. The concept "move", as learned by A.s.a., includes the sensation of "sound". A.s.a. has heard its motors/servos running as it moves its body or its manipulators. Should I delete "sound" as a vector component of  "move"? Similarly, should "pain" be a component of "damage" or not? As a compromise I can simply reduce the strength of some vector components but not delete them entirely. But reduce by how much?
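
The compromise described above can be sketched as scaling a component rather than deleting it (the case vector and the 0.25 factor are invented for illustration; the right factor remains an open question):

```python
# A learned concept as a vector of component strengths. "sound" crept
# in because the robot hears its own servos whenever it moves.
move = {"motor_on": 1.0, "position_change": 0.9, "sound": 0.6}

def attenuate(concept, component, factor):
    """Reduce (but keep) an incidental component of a case vector."""
    cleaned = dict(concept)   # leave the original case untouched
    cleaned[component] = round(cleaned[component] * factor, 3)
    return cleaned

print(attenuate(move, "sound", 0.25))
# 'sound' drops from 0.6 to 0.15 but remains part of the concept
```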

Saturday, March 2, 2019


I like RobotBASIC despite some syntax differences as compared with QB64. I like the simulator* and have used RobotBASIC to control LEGO NXT pbricks** and Arduinos.*** Unfortunately A.s.a. H. runs quite slowly in RobotBASIC (as compared with QB64 for example).

* See, for example, Robot Programmer's Bonanza, McGraw Hill, 2008

** See RobotBASIC Projects for the Lego NXT, Blankenship and Mishal, 2011

*** Interfacing the Arduino with a PC Using RobotBASIC's Protocol, Blankenship and Mishal, 2011

Sunday, February 24, 2019

Clean concepts

I am trying to employ a training curriculum for A.s.a. H. wherein important concepts are introduced with minimalist environments and tasks, as free of distractors as possible. I hope this will help with focus of attention issues later on.

Sunday, February 17, 2019

Qualia, subjectivity, and ambiguity

When A.s.a. H. hears a word it associates that word with its current active case/concept (or cases/concepts), possibly on more than one level in the concept abstraction hierarchy. If the environment and sequence of events are simple the word acquires a fairly unambiguous meaning. A simple robot moving forward in an empty space may learn to “stop,” “speed up,” or “slow down.” (See chapter one of my book Twelve Papers.) In a richer environment words acquire more ambiguous meanings.* Meanings will also be subjective since different agents will have learned different concepts prior to associating them with words/names. The physical sensations perceived by different agents will also vary: if yellow light falls onto the eyes of two different humans the exact activations of their “red,” “green,” and “blue” cones will not be identical. The same is true for robotic senses.

* Hand coding and adjustments can help to reduce ambiguities somewhat.
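
A minimal sketch of this word-concept association and the resulting ambiguity (the concept names are invented):

```python
from collections import defaultdict

# lexicon[word][concept] counts how often a heard word has
# co-occurred with an active concept.
lexicon = defaultdict(lambda: defaultdict(int))

def hear(word, active_concepts):
    for c in active_concepts:
        lexicon[word][c] += 1

# Simple environment: "stop" always coincides with one concept.
hear("stop", ["halt_motion"])
hear("stop", ["halt_motion"])
# Richer environment: the same word co-occurs with several concepts.
hear("stop", ["halt_motion", "obstacle_ahead"])

print(dict(lexicon["stop"]))
# {'halt_motion': 3, 'obstacle_ahead': 1} -- the meaning is now ambiguous
```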

Saturday, February 2, 2019


I know it will only add to the issues I described in my last post but I couldn’t resist trying out the BeagleBone Black.

Monday, January 21, 2019


All computer programs of significant size have bugs in them. (As do the libraries, compilers, and hardware* they make use of.) I have spent much of the year so far trying to address some issues I ran into while assembling a large (A.s.a. H.) hierarchical case base from pieces learned by different robotic agents, both real and simulated, run at different times, in multiple environments, performing a variety of tasks.

Too many platforms, too many operating systems, too many languages, too many compilers....

* For example, with one of my Raspberry Pis the micro SD card must be plugged in just right. I've also commented previously on issues with LEGO plugs and the like.

Thursday, January 10, 2019

A.s.a. multitasking

We all know that multitasking while driving is a bad idea but most humans feel free to chat with companions while driving. Similarly, a portion of A.s.a. H.'s hierarchical network can be directing a robot to a recharging station* while another portion of the network may be extrapolating, interpolating, planning, etc. on some completely unrelated problems.

* Or transporting a solar array to a sunny location.

More redesign

I am redesigning A.s.a.'s robots trying to address the problem noted in my 13 Dec. 2017 blog. I want to put as many of the pain sensors (and lead wires) as I can inside the robot bodies. I may try to make greater use of my 3 Raspberry Pis while I’m at it.

The human brain has no pain receptors in it. A.s.a., however, can carry thermistors* and accelerometers inside its computer brain.

* Raspberry Pis, for instance, might experience overheating issues.

Tuesday, January 1, 2019

Wall confined plasmas

Classical theory greatly overestimates the confinement of beta > 1 plasmas. Intense particle and heat loss in actual experiments has made it difficult to reach (Tn)/(B*B) =1 in order to even study the wall confinement regime.
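
Reading (Tn)/(B*B) as a stand-in for the plasma beta (my assumption about the intended quantity), the target condition is:

```latex
% beta = 1: plasma kinetic pressure equal to magnetic pressure
\beta \;=\; \frac{n k_B T}{B^2 / 2\mu_0} \;=\; 1 ,
\qquad \beta > 1 \;\text{(wall-confinement regime)}
```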

Robot controller

Many of my robots have been tethered to computers and/or power supplies. This constrains their operation somewhat. Because of its low cost (80 U.S. $) and large number of ports (39) I have bought an EZ-Robot EZ-B V4/2 WiFi robot controller to try out.* Due to limited funds (equipment) and lab work space I typically have to disassemble a robot or two before I can build a different one.** The New Year's break may give me the time that I need to do that.

* We may still need a tether to a power source for some experiments.
** My blog of 9 Nov. 2018 addresses this issue when only the processor needs to be changed out. Modular robot designs can also help.

An argument for alternative logics

Logic is about the formalization of sound reasoning. Since there are different ways of reasoning* there are different logics.

* Those few who might dispute this would argue for a single deductive reasoning. But that would require starting from a set of absolute and eternal truths and these are not available to us.

Medical AI use

Ulloa, et al, report (arXiv:1811.10553, 27 Nov. 2018) that a deep neural network can predict patient 1-year survival from echocardiograms significantly more accurately than trained human cardiologists.

The development of AI in general might take the form of creating specialist AIs like this and adding them to a growing society of agents.* Some of my work with A.s.a. H. has been along these lines. This is the pattern by which machines have taken over other tasks from humans and draft animals, etc. (i.e. automation)**

* Kiva logistics (warehouse) robots would also be an example.

** And see my blog of 20 Sept. 2018.

A danger in education research?

Learning is, in general, NP hard. But some problems are easy or easier. There may be a temptation to simply use education research to identify the easy subject matter (and methods) and then only teach those topics or those examples. This is likely to prove popular with the student community and administration. We might then end up only teaching classical (e.g. Newtonian) physics, for example, and neglect harder things like relativity and quantum mechanics. The difficult foreign language courses have disappeared from many colleges.

I see no problem with identifying and starting the students off with the easier material. I just want to be sure that we eventually cover important subject matter even when it is difficult.

Divided consciousness

Hilgard explored divided consciousness in humans. (Divided Consciousness, Wiley, 1986) A.s.a. H. thinks about its own thoughts when it extrapolates, interpolates, plans, etc. This, as well as things like attention, intention, and short term memory, is divided up across various levels in the A.s.a. hierarchy.*

My work on consciousness is part of a broad effort to understand and to give AIs adequate attention mechanisms. I’ve considered translating A.s.a. H.’s consciousness** into PROLOG and adding it to rule based expert systems. This would require fibring PROLOG with a temporal logic, however, so as to preserve the time order of various events/processes. (And assumes that the expert system has appropriate sensors, actuators, and operates in a world similar to the one A.s.a. H. was trained in.) I would also have to give the expert system similarity measures.

* Modeling across multiple levels of abstraction is important and was designed into A.s.a. H. from the beginning, T.K.A.S., vol. 109, number 3/4, page 159, 2006. See Stuart Russell in Ford's Architects of Intelligence, Packt, 2018, page 52. See also D. Estrada, Conscious Enactive Computation, arXiv:1812.02578, 7 Dec. 2018.

** Such as in my blog of 1 Jan. 2017.

Alternate realities

In his book Coherence in Thought and Action Paul Thagard explores the relative coherence of materialism, theism, and dualism. (MIT Press, 2000, especially page 119) While I agree with Thagard's general conclusion that materialism is more coherent than theism and dualism I would differ a bit on the specifics of his evidence and explanations. (pages 121-124) I also believe that materialism, dualism, and theism are each sufficiently coherent as to each constitute its own alternate reality, each being accepted by different groups of people.

Muller has argued that our various realities are emergent from a more fundamental first person state space. See arXiv:1712.01816 and 1712.01826. (Much in the way A.s.a. H. creates its models of the world, abstracting away from its first person sense impressions and actions.)

A.s.a. H. as a hierarchical genetic algorithm

One of the original learning methods I used on A.s.a. was the mutation of the strengths of the components of the various case vectors. This was employed on each level of the knowledge hierarchy.
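
That mutation step can be sketched as follows (the case vector's contents and the mutation scale are invented for illustration):

```python
import random

def mutate(case, scale=0.05, rng=random):
    """Perturb each component strength by a small random amount,
    clipping the result to [0, 1]."""
    return {k: min(1.0, max(0.0, v + rng.uniform(-scale, scale)))
            for k, v in case.items()}

case = {"motor_on": 1.0, "position_change": 0.9, "sound": 0.15}
rng = random.Random(0)          # seeded for repeatability
mutated = mutate(case, rng=rng)
print(mutated)  # each strength shifted by at most 0.05, still in [0, 1]
```

In the full hierarchical genetic algorithm this mutation would be applied to the cases on every level of the knowledge hierarchy, with selection then keeping the variants that score better on the value system.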