Monday, June 17, 2019

What is life?

Life isn’t a thing. Life is a process, a complex network of linked co-operating, self-sustaining, and self-regulating feedback loops.* The exact details depend upon (and could vary with) the environment (the hardware) in which the network operates.

* See Peter Hoffmann, Life’s Ratchet, Basic Books, 2012, especially pages 229-231.

Thursday, June 13, 2019

Another example of vector values

Team A may regularly beat team B. Team B may regularly beat team C. And team C may regularly beat team A. This makes no sense if one tries to rank teams with a scalar value from “best” to “worst.” It does happen, however, due to the ways in which teams happen to “match up.” Suppose (American football) teams are described by four component quantities: passing ability, running ability, pass defense, and run defense. Perhaps team A has a very good pass defense and can run. Team B has a very good passing game and can defend the pass. Team C can run and can defend the run.
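The cycle can be made concrete with a toy model. The team vectors and scoring rule below are my own invented illustration, not data from any real league: each side scores only where its offense exceeds the matching defense.

```python
# Hypothetical component vectors on a 0-10 scale: (pass, run, pass_def, run_def).
# Values chosen to match the text: A defends the pass and runs; B passes and
# defends the pass; C runs and defends the run.
teams = {
    "A": {"pass": 2, "run": 7, "pass_def": 9, "run_def": 3},
    "B": {"pass": 9, "run": 2, "pass_def": 7, "run_def": 4},
    "C": {"pass": 2, "run": 9, "pass_def": 3, "run_def": 7},
}

def score(x, y):
    """Points x scores against y: offense minus matching defense, floored at 0."""
    tx, ty = teams[x], teams[y]
    return (max(0, tx["pass"] - ty["pass_def"])
            + max(0, tx["run"] - ty["run_def"]))

def winner(x, y):
    """Whichever side out-scores the other (x on a tie, for simplicity)."""
    return x if score(x, y) >= score(y, x) else y

# A beats B, B beats C, C beats A -- no scalar ranking can reproduce this.
print(winner("A", "B"), winner("B", "C"), winner("C", "A"))
```

With these (made-up) numbers A shuts down B’s passing while running on B, B out-passes C, and C runs over A, so the pairwise results form a cycle that no single “strength” number could produce.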

Tuesday, June 4, 2019

The problem of evil

Evil (danger) may be necessary, in the way suggested in my blog of 10 March 2017. With too little danger evolution might not produce brains and minds. Minds are an adaptation to a particular range of environmental conditions. In animals and in A.s.a. H. pain is an important feedback signal. See also my blog of 24 April 2019.

Thursday, May 30, 2019

Society of intelligent agents thinking with simulations

Intelligent agents (both biological and artificial/mechanical) are given tasks to perform in various environments. Knowledge learned in one environment is then available for use in others. For any given A.s.a. H. agent some of these may be real-world environments experienced by its robots while others may be simulations* experienced by simulated robots. The artificial world of the simulation can, in turn, have been learned** by some other*** A.s.a. H. agent(s) and its robots as it acts in the real world.

* like those provided by the RobotBASIC simulator for example
** like a map, perhaps (also, see my blogs of 7 Jan. 2015 and 7 May 2017)
***or possibly the same agent

Monday, May 27, 2019

Task failure, cognitive success

Most of the robots available to A.s.a. H. are quite clumsy and don’t always succeed at the tasks we set out for them*, but the relevant symbol grounding and concept formation is accomplished.

* Finding a charging station in a cluttered environment, docking, and recharging batteries for example.

Friday, May 24, 2019

The emergence of logical thinking in A.s.a. H.

C. Ivan and B. Indurkhya argue* that logical thinking emerges in three stages. The first stage finds patterns of association in observations. The second stage learns examples of what occurs if I perform action X and examples of how I can make Y occur. The third stage compiles examples of Y being caused by X and considers what would likely have happened had I done X in a particular situation. A.s.a. H. performs all of these cognitive operations.

* arXiv:1905.09730v1, 23 May 2019
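The three stages can be sketched in miniature. The episodes, action names, and the frequency-counting scheme below are my own toy construction, not the Ivan–Indurkhya model or A.s.a. H. code; they only illustrate how association, action-outcome knowledge, and counterfactual queries can build on one another.

```python
from collections import Counter

# Hypothetical experience log: (action, outcome) pairs the agent has lived through.
episodes = [("push", "falls"), ("push", "falls"), ("lift", "rises"),
            ("push", "stays"), ("lift", "rises")]

# Stage 1: patterns of association -- how often each action/outcome pair co-occurs.
assoc = Counter(episodes)

# Stage 2: "what occurs if I perform action X" -- an empirical P(outcome | action)
# built from the agent's own experience.
def predict(action):
    outcomes = Counter(o for a, o in episodes if a == action)
    total = sum(outcomes.values())
    return {o: n / total for o, n in outcomes.items()}

# Stage 3: the counterfactual query -- what would likely have happened had I
# done X? Here, simply the most probable outcome under the stage-2 model.
def counterfactual(action):
    probs = predict(action)
    return max(probs, key=probs.get)

print(counterfactual("push"))  # pushing most often made the object fall
```

Stage 3 here is deliberately naive; the point is only that each stage reuses the structures the previous one learned, which is the layering the paper describes.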

More force and weight sensors

The force sensing whiskers I described in my blog of 16 November 2016 can also be used as fingers on grippers or as feet on walking robots.