Sunday, March 19, 2023

Hard-to-define concepts

Even cognitive scientists do not agree on what constitutes concepts like consciousness. As we teach A.s.a. H. language, we can see what conceptual structures the various words are labeling.* For example, we find, approximately:

conscious = (sense, think, health)

alive = (sense, act)

self = (sense, think, act, health)

where sense, think, act, and health are, in turn, defined roughly** as described in my blogs of 1 October 2015 and 21 July 2016.
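As an illustration only (this is not A.s.a. H.'s actual code), these approximate definitions can be sketched as sets of component concepts, which makes their overlap and subsumption relations easy to inspect. The names below are my own hypothetical labels for the components mentioned above.

```python
# Hypothetical sketch: encode each learned concept as a set of its
# component concepts, as approximated in the text above.
conscious = {"sense", "think", "health"}
alive = {"sense", "act"}
self_concept = {"sense", "think", "act", "health"}

# Under this encoding, "self" subsumes both "conscious" and "alive":
assert conscious <= self_concept   # conscious is a subset of self
assert alive <= self_concept       # alive is a subset of self

# The components shared by "conscious" and "alive":
shared = conscious & alive
print(sorted(shared))              # ['sense']
```

Of course, since A.s.a. H.'s memory is dynamic (see the first footnote), any such snapshot of the concept structure is only approximate and can drift with further experience.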

* But one must recall that A.s.a. H. has a dynamic memory so individual memories/concepts can change somewhat as a result of new experiences. See my blog of 1 January 2023. 

** Different experiments have involved various robots with different sensors, servos, etc., so these concepts can vary accordingly.

The importance of category formation

Yuan has emphasized the importance of category learning for artificial general intelligence and XAI.* A.s.a. H. employs just such a system.**

* Yang Yuan, A Categorical Framework of General Intelligence, arXiv:2303.04571v1, 8 March 2023.

** See, for example, R. Jones, Trans. Kansas Acad. Sci., Vol. 120, pg. 108, 2017, and my blogs of 1 Oct. 2015, 21 July 2016, 19 Oct. 2016, and 3 Aug. 2018.

Wednesday, March 1, 2023

SpaceX Starship needs a launch abort system

By launching the crew on Orion and rendezvousing with Starship in Earth or lunar orbit, one can sidestep the problem for now. But someone might be tasked with looking at ejection seats and things like the old M.O.O.S.E.* bail-out-from-orbit ideas.

* General Electric's Manned Orbital Operations Safety Equipment. See, for example, Teitel, Discover magazine, 12 Oct. 2017, or the Wikipedia article on MOOSE.

Black boxes, explanations

People typically can give brief, simple explanations (reasons) for their beliefs and actions. These may, in fact, be approximations to what is really a network of subconscious motivations.

Sometimes A.s.a. H. takes an action or makes a decision that I don't expect or understand. When that occurs I usually try to explore the hierarchical patterns of activation that led to the action or decision. I have found that frequently a number of weak activations have "summed up" to produce the unexpected outcome (much as a large number of small weights in a neural network can act together to produce some output signal).
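The neural-network analogy can be sketched in a few lines. This is a generic toy example with made-up numbers, not A.s.a. H.'s actual activation mechanism: no single input activation is strong enough to cross the unit's threshold, yet their sum is.

```python
# Toy illustration: many weak activations, none decisive on its own,
# can jointly push a unit past its firing threshold.
activations = [0.12, 0.09, 0.15, 0.11, 0.08, 0.14, 0.10, 0.13]  # hypothetical values
threshold = 0.5

# No single activation exceeds the threshold...
assert all(a < threshold for a in activations)

# ...but together they do, producing the "unexpected" output.
total = round(sum(activations), 2)
fires = total > threshold
print(total, fires)  # 0.92 True
```

This is why tracing any one activation in isolation can fail to explain the decision: the explanation lives in the aggregate, not in any individual input.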