Wednesday, December 1, 2021

Utility functions

Landgrebe and Smith have criticized* the utility functions typically used** in defining and measuring artificial intelligence. While I do not agree with their conclusions, I do agree that better value systems are required for intelligent agents: vector value systems rather than scalar ones. See, for example, my blogs of 19 February 2011 and 21 September 2010.
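
To make the contrast with a scalar utility concrete, here is a minimal sketch in Python of how an agent might compare alternatives under a vector value system. The value dimensions and scores are invented for illustration; this is not Asa's actual code. With vector values, alternatives are compared by Pareto dominance rather than being collapsed into a single number:

    import numpy as np

    def pareto_dominates(u, v):
        # True if vector u is at least as good as v on every value
        # dimension and strictly better on at least one.
        u, v = np.asarray(u), np.asarray(v)
        return bool(np.all(u >= v) and np.any(u > v))

    # Hypothetical actions scored on three value dimensions
    # (say: safety, energy reserve, task progress).
    actions = {
        "wait":    np.array([0.9, 0.8, 0.1]),
        "explore": np.array([0.6, 0.5, 0.7]),
        "charge":  np.array([0.9, 1.0, 0.2]),
    }

    # Keep only the non-dominated (Pareto-optimal) actions.
    frontier = [a for a, u in actions.items()
                if not any(pareto_dominates(v, u)
                           for b, v in actions.items() if b != a)]
    print(frontier)   # ['explore', 'charge']: "wait" is dominated by "charge"

A scalar utility would force a total ordering of the three actions; the vector comparison instead leaves a frontier of incomparable alternatives among which other considerations can decide.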

A.s.a. H. is not subject to the constraints/limitations Landgrebe and Smith throw up because it creates its own alphabets and vocabularies based on the patterns it observes in the environment it is experiencing. Behaviors, causal sequences, etc., are likewise learned from its environment, are revisable, and the learning process is ongoing and open-ended. Asa is basically doing science on its own, revising its models as needed.
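
As a rough illustration of how an agent can coin its own vocabulary from experience, the sketch below clusters observations online and treats each cluster center as a newly created symbol; novel patterns open new clusters, so the alphabet stays open-ended. The distance threshold and averaging rule here are my own illustrative assumptions, not Asa's actual mechanism:

    import numpy as np

    class Vocabulary:
        # Observed feature vectors are clustered online; each cluster
        # center becomes a new "letter" in a self-created alphabet.
        # The threshold and update rule are illustrative assumptions.
        def __init__(self, threshold=0.5):
            self.threshold = threshold
            self.symbols = []              # learned cluster centers

        def observe(self, x):
            x = np.asarray(x, dtype=float)
            if self.symbols:
                d = [np.linalg.norm(x - s) for s in self.symbols]
                i = int(np.argmin(d))
                if d[i] < self.threshold:
                    # Familiar pattern: refine the existing symbol.
                    self.symbols[i] = 0.9 * self.symbols[i] + 0.1 * x
                    return i
            # Novel pattern: coin a new symbol (open-ended growth).
            self.symbols.append(x)
            return len(self.symbols) - 1

    vocab = Vocabulary()
    for x in [[0.0, 0.1], [0.05, 0.12], [1.0, 1.0]]:
        print(vocab.observe(x))            # 0, 0, 1 -> two symbols learned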

* See, for example, Landgrebe and Smith, An Argument for the Impossibility of Machine Intelligence, arXiv:2111.07765v1, 20 Oct. 2021.

** See, for example, Marcus Hutter, Universal Artificial Intelligence, Springer, 2004, pp. 129-140.

Attention

How much should A.s.a. H. limit the number of concepts that are activated and output from one (lower) layer in the hierarchy and input to the next (higher) layer? In humans, attention (working memory) is limited to roughly seven plus or minus two items.
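
One simple way to impose such a limit between layers is top-k gating: only the k most strongly activated concepts in the lower layer are passed up as input to the next layer. Below is a minimal sketch, with k = 7 chosen only by analogy with the human working-memory limit, not necessarily the right value for Asa:

    import numpy as np

    def attend(activations, k=7):
        # Pass only the k most strongly activated concepts up to the
        # next layer; the rest are zeroed out. Both k and the gating
        # policy are illustrative assumptions.
        a = np.asarray(activations, dtype=float)
        keep = np.argsort(a)[-k:]          # indices of the k largest
        out = np.zeros_like(a)
        out[keep] = a[keep]
        return out

    lower_layer = np.array([0.1, 0.9, 0.3, 0.8, 0.05, 0.6, 0.7, 0.2, 0.4, 0.55])
    print(attend(lower_layer))             # the 3 weakest concepts are suppressed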