Sunday, May 29, 2022

Values

If an agent needs something, it values that thing. Since a human's needs* differ from those of an AI, humans and AIs will have different values. This may well lead to some conflict between humans and AIs.

* Humans need air, water, food, mates, etc. (The problem with money, a scalar utility, is that it is one dimensional while the world is not.)
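To make that one-dimensionality point concrete, here is a small sketch (illustrative only; the need dimensions, weights, and function names are my choices here, not anything from A.s.a. H.): two states can receive identical scalar utilities even though they satisfy quite different needs, and when compared as vectors neither one dominates the other.

# Sketch: scalar utility collapses distinct need profiles; a vector
# utility keeps them distinguishable (here via Pareto dominance).
import numpy as np

NEEDS = ["air", "water", "food", "social"]     # illustrative need dimensions

def scalar_utility(state, weights=None):
    """Collapse a need-satisfaction vector to a single number (like money)."""
    weights = np.ones(len(state)) if weights is None else weights
    return float(np.dot(state, weights))

def pareto_dominates(a, b):
    """a dominates b only if it is at least as good on every need and
    strictly better on at least one."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a >= b) and np.any(a > b))

s1 = np.array([1.0, 0.2, 0.9, 0.5])   # plenty of food, little water
s2 = np.array([1.0, 0.9, 0.2, 0.5])   # plenty of water, little food

print(scalar_utility(s1), scalar_utility(s2))               # 2.6 and 2.6: the scalar can't tell them apart
print(pareto_dominates(s1, s2), pareto_dominates(s2, s1))   # False, False: as vectors they are incomparable

Two agents with different weightings over such need dimensions will rank the same states differently, which is the sense in which their values can conflict.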


Sunday, May 1, 2022

Mental time travel

A.s.a. H. decomposes its model of the world into concepts and sequences (transition models over the vector state space). In the version of A.s.a. H. 2.0 light published in my 10 February 2011 blog, utility-controlled extrapolations can occur around line 3000 of the code. Other learning/extrapolation algorithms have been discussed in several of my publications on A.s.a. H.* With these algorithms A.s.a. rethinks the past, and past actions, on multiple levels of abstraction over its hierarchical memory, trying to improve the expected rewards/utility. Responses to alternative possible environmental conditions are also explored and evaluated. This is mental time travel, another piece of machine consciousness.

* See, for example, Trans. Kansas Academy of Science, vol. 109, no. 3-4, pp. 159-167, 2006 and vol. 124, no. 1-2, p. 146, 2021, as well as work with A.s.a. F.
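As a rough illustration of the idea (this is a sketch of my own, not the actual A.s.a. H. 2.0 light routines; the episode format, transition_model, and utility function are all assumptions), one can replay a remembered episode, substitute alternative actions at individual steps, roll the imagined consequences forward with a learned transition model, and keep a variant only if its predicted total utility beats the remembered one:

# Minimal sketch of utility-guided "mental time travel" over stored episodes.
import random

def replay_and_improve(episode, actions, transition_model, utility, tries=10):
    """episode: list of (state, action) pairs already experienced.
    Rethink past steps by substituting alternative actions; keep an
    imagined variant only if its predicted utility is higher."""
    best = list(episode)
    best_u = sum(utility(s) for s, _ in best)
    for _ in range(tries):
        variant = list(best)
        i = random.randrange(len(variant))          # pick a past step to rethink
        state, _ = variant[i]
        variant[i] = (state, random.choice(actions))  # try a different action there
        for j in range(i + 1, len(variant)):        # roll the imagined future forward
            prev_state, prev_action = variant[j - 1]
            variant[j] = (transition_model(prev_state, prev_action), variant[j][1])
        u = sum(utility(s) for s, _ in variant)
        if u > best_u:                              # keep only improvements
            best, best_u = variant, u
    return best, best_u

The same loop can be run over sequences at each level of the hierarchy, and it can be seeded with hypothetical rather than remembered initial states, which covers the exploration of alternative possible environmental conditions mentioned above.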

Laminar Computing

I am looking at Steven Grossberg's summary* of his 50 years of cognitive science research, Conscious Mind, Resonant Brain (Oxford Univ. Press, 2021). Grossberg argues that a (perhaps the) most important feature of an intelligent system is its use of "laminar computing." His LAMINART model is organized in layers, much as A.s.a. H. is**, i.e., it employs laminar computing.

* But you have to go to his original publications for the mathematical details.

** The algorithms Grossberg uses differ from those of my A.s.a. H. (Though I have sometimes used his ART as the clustering algorithm in A.s.a. H.)
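To make the shared layered organization concrete, here is a small sketch of a hierarchy in which each layer clusters the output of the layer below and passes its abstractions upward. The simple nearest-centroid rule below merely stands in for whatever algorithm a given layer actually uses (ART, k-means, etc.); it is illustrative only and is neither Grossberg's LAMINART nor the A.s.a. H. code.

# Illustrative layered ("laminar") hierarchy: each layer categorizes its
# input vectors and hands the winning category prototype up to the next layer.
import numpy as np

class Layer:
    def __init__(self, vigilance=0.8):
        self.vigilance = vigilance      # similarity needed to join an existing category
        self.centroids = []

    def categorize(self, x):
        x = np.asarray(x, dtype=float)
        x = x / (np.linalg.norm(x) + 1e-12)
        for i, c in enumerate(self.centroids):
            if np.dot(x, c) >= self.vigilance:          # close enough: update and reuse
                merged = c + x
                self.centroids[i] = merged / (np.linalg.norm(merged) + 1e-12)
                return self.centroids[i]
        self.centroids.append(x)                        # otherwise start a new category
        return x

class Hierarchy:
    def __init__(self, n_layers=3):
        self.layers = [Layer() for _ in range(n_layers)]

    def present(self, x):
        for layer in self.layers:       # each layer abstracts the one below
            x = layer.categorize(x)
        return x                        # top-level abstraction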