It remains to be seen whether transformer-based neural networks (LLMs) can learn to approximate the human cognitive architecture. Even if they could, humans are not themselves rational,* nor do we possess a sound value system.** Noorbakhsh Amiri Golilarz et al. give a good discussion*** of some of the capabilities that current LLMs lack.

With my A.s.a. H. architecture I have tried to address the shortcomings Golilarz et al. outline. Self-monitoring, allocation of resources, and meta-cognitive awareness are discussed in my "Experiments with Asa H" in my book Twelve Papers.**** The learned case-base (the knowledge representation, concept structure, and memory) is corrected and repaired, grows, updates, and evolves over time (see the sketch after the notes below). Deployed on robots, Asa is fully embodied (sensorimotor grounding). And the hierarchical case-base is distributed across multiple timescales.
* See Predictably Irrational, Dan Ariely, Harper, 2008.
** We are, after all, apes. (Gentle, very modern apes, as Erika would say.)
*** N. A. Golilarz et al., "Bridging the Gap: Toward Cognitive Autonomy in Artificial Intelligence," arXiv:2512.02280v1, 1 Dec. 2025.
**** Available under "book" at www.robert-w-jones.com.
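Asa H's actual case-based learning is described in the papers cited above. Purely as an illustration of the kind of bookkeeping a growing, self-correcting case-base involves, here is a minimal Python sketch. All of the names, the similarity measure, and the utility-update rule are my own simplifications, not taken from Asa H itself, and the hierarchy across multiple timescales is omitted for brevity.

# Toy sketch of a case-base that grows, is utility-updated, and is pruned.
# NOT the Asa H code; every name and parameter here is hypothetical.

from dataclasses import dataclass

@dataclass
class Case:
    sequence: list        # a stored spatiotemporal pattern (list of feature vectors)
    utility: float = 0.0  # running estimate of this case's value

def similarity(a, b):
    """Dot-product similarity between two equal-length feature vectors."""
    return sum(x * y for x, y in zip(a, b))

class CaseBase:
    def __init__(self, prune_threshold=-1.0):
        self.cases = []
        self.prune_threshold = prune_threshold

    def record(self, sequence):
        """Grow: store a newly experienced sequence as a case."""
        self.cases.append(Case(sequence))

    def best_match(self, observation):
        """Retrieve the case whose opening vector best matches the observation."""
        if not self.cases:
            return None
        return max(self.cases,
                   key=lambda c: similarity(c.sequence[0], observation))

    def reinforce(self, case, reward, rate=0.1):
        """Update/repair: move the case's utility toward the observed reward."""
        case.utility += rate * (reward - case.utility)

    def prune(self):
        """Evolve: forget cases whose utility has fallen too low."""
        self.cases = [c for c in self.cases if c.utility > self.prune_threshold]

if __name__ == "__main__":
    cb = CaseBase()
    cb.record([[1.0, 0.0], [0.0, 1.0]])  # grow the case-base
    case = cb.best_match([0.9, 0.1])     # retrieve the closest stored case
    cb.reinforce(case, reward=1.0)       # strengthen it when it proves useful
    cb.prune()                           # drop low-utility cases over time
    print(len(cb.cases), case.utility)

In the full architecture one would stack such case-bases, with cases at one level serving as the vocabulary of the level above it and higher levels spanning longer timescales; that hierarchy is what the sketch leaves out.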