Landgrebe and Smith have criticized* the utility functions typically used** in defining and measuring artificial intelligence. While I do not agree with their conclusions, I do agree that better value systems are required for intelligent agents; I have argued for vector value systems. See, for example, my blogs of 19 February 2011 and 21 September 2010.
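To illustrate what a vector value system can do that a scalar utility cannot, here is a minimal sketch. It is purely illustrative (the value components and numbers are made up, not taken from Asa H.): each outcome is scored along several value components, and one outcome is preferred only when it Pareto-dominates another; otherwise the two remain incomparable rather than being forced onto a single scale.

```python
# Hypothetical illustration of a vector value system. Instead of a single
# scalar utility, each outcome is scored along several value components
# (here: survival, energy, knowledge -- illustrative labels only).
# One outcome "dominates" another only if it is at least as good on every
# component and strictly better on at least one (Pareto dominance).

def dominates(a, b):
    """Return True if value vector a Pareto-dominates value vector b."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Example value vectors: (survival, energy, knowledge)
safe_plan = (0.9, 0.5, 0.2)
bold_plan = (0.6, 0.5, 0.8)
poor_plan = (0.5, 0.4, 0.1)

print(dominates(safe_plan, poor_plan))  # True: better or equal on every component
print(dominates(safe_plan, bold_plan))  # False: a trade-off, so incomparable
```

The point of the sketch is that the safe and bold plans cannot be ranked against each other; a scalar utility would be forced to rank them, hiding the trade-off.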
A.s.a. H. is not subject to the constraints and limitations Landgrebe and Smith raise because it creates its own alphabets and vocabularies from the patterns it observes in the environment it is experiencing. Behaviors, causal sequences, and the like are likewise learned from its environment, are variable, and the learning process is ongoing and open-ended. Asa is, in essence, doing science on its own, revising its models as needed.
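A minimal sketch of the kind of open-ended vocabulary formation described above, assuming nothing about Asa H.'s actual implementation: frequently recurring short patterns (n-grams) in an observation stream are promoted to named chunks, and the process can be repeated on the resulting chunk stream. The function name and thresholds are illustrative only.

```python
from collections import Counter

# Hypothetical sketch of open-ended vocabulary formation from an
# observation stream: n-grams that recur often enough are promoted
# to vocabulary chunks. Names and thresholds are illustrative, not
# taken from Asa H. itself.

def learn_vocabulary(stream, n=2, min_count=3):
    """Promote n-grams seen at least min_count times to vocabulary chunks."""
    grams = Counter(tuple(stream[i:i + n]) for i in range(len(stream) - n + 1))
    return {gram for gram, count in grams.items() if count >= min_count}

observations = list("abababcdcdcdxy")
print(learn_vocabulary(observations))  # {('a', 'b'), ('c', 'd')}
```

Because the learned chunks can themselves be fed back in as symbols of a new, higher-level stream, the vocabulary grows open-endedly as the environment supplies new regularities.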
* See, for example, Jobst Landgrebe and Barry Smith, An Argument for the Impossibility of Machine Intelligence, arXiv:2110.07765v1, 20 Oct. 2021.
** See, for example, Marcus Hutter, Universal Artificial Intelligence, Springer, 2004, pp. 129-140.