Saturday, January 1, 2022

Bot library

 I keep code libraries in BASIC, Prolog, C/C++, Python, Lisp, Java, etc. In Python, for example, I have sample programs for clustering, neural networks, genetic programming, statistics, logic programming, expert systems, etc. I am putting together a library of robot designs for walkers, grippers, arms, humanoids, etc. Such libraries help to speed up the design of new experiments.
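For illustration, here is a minimal sketch of the kind of sample program such a library entry might hold, in this case a simple k-means clustering routine. The data and parameters are made up for the example and are not taken from my actual library.

```python
# A minimal k-means clustering routine, the kind of sample program
# kept in such a code library. Data and parameters are illustrative only.
import random

def kmeans(points, k, iterations=20):
    """Cluster 2-D points into k groups by iterative mean refinement."""
    centers = random.sample(points, k)
    for _ in range(iterations):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            best = min(range(k), key=lambda i: (p[0] - centers[i][0])**2 +
                                               (p[1] - centers[i][1])**2)
            clusters[best].append(p)
        # Recompute each center as the mean of its assigned points.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers, clusters

if __name__ == "__main__":
    data = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)] + \
           [(random.gauss(5, 1), random.gauss(5, 1)) for _ in range(50)]
    centers, clusters = kmeans(data, k=2)
    print("cluster centers:", centers)
```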

Wednesday, December 1, 2021

Utility functions

 Landgrebe and Smith have criticized* the utility functions typically used** in defining and measuring artificial intelligence. While I do not agree with their conclusions, I do agree that better value systems are required for intelligent agents: vector value systems. See, for example, my blogs of 19 February 2011 and 21 September 2010.

A.s.a. H. is not subject to the constraints/limitations Landgrebe and Smith raise because it creates its own alphabets and vocabularies based on the patterns it observes in the environment it is experiencing. Behaviors, causal sequences, etc., are also learned from its environment, are variable, and the process is ongoing and open-ended. Asa is essentially doing science on its own, revising its models as needed.
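To illustrate what a vector value system buys over a single scalar utility, here is a small sketch. The value components and the dominance test are my own illustrative choices for the example, not code or value dimensions taken from Asa itself.

```python
# Illustrative contrast between a scalar utility and a vector of values.
# The component names and the dominance test are hypothetical examples,
# not taken from the A.s.a. H. source.

def scalar_utility(outcome):
    # Collapses everything into a single number; information is lost.
    return sum(outcome.values())

def dominates(a, b):
    """Vector comparison: a dominates b if it is at least as good on every
    value dimension and strictly better on at least one."""
    keys = a.keys()
    return all(a[k] >= b[k] for k in keys) and any(a[k] > b[k] for k in keys)

plan_1 = {"survival": 0.9, "energy": 0.2, "knowledge": 0.7}
plan_2 = {"survival": 0.4, "energy": 0.8, "knowledge": 0.6}

print(scalar_utility(plan_1), scalar_utility(plan_2))        # 1.8 vs 1.8: a tie
print(dominates(plan_1, plan_2), dominates(plan_2, plan_1))  # neither dominates
```

The scalar utility calls the two plans equal; the vector comparison keeps the trade-off between the value dimensions visible.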

* See, for example, An Argument for the Impossibility of Machine Intelligence, arXiv:2111.07765v1, 20 Oct. 2021.

** See, for example, Marcus Hutter, Universal Artificial Intelligence, Springer, 2004, pgs 129-140.

Attention

 How much should A.s.a. H. limit the number of concepts activated and output from one (lower) layer in the hierarchy and input to the next (higher) layer? In humans we have the seven plus or minus two limit on attention (working memory).
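One simple way to impose such a limit would be to forward only the most strongly activated concepts, roughly seven of them. The sketch below is an assumption about how that gating might look; the concept names, activation values, and top-k scheme are illustrative, not Asa's actual mechanism.

```python
# Hedged sketch: gate the concepts passed from a lower layer to the next
# higher layer, keeping only the top-k activations (k ~ 7 +/- 2).
# The concept names and values are made-up examples.

def attend(activations, k=7):
    """Return the k most strongly activated (concept, strength) pairs."""
    ranked = sorted(activations.items(), key=lambda item: item[1], reverse=True)
    return dict(ranked[:k])

lower_layer_output = {
    "edge": 0.91, "wheel": 0.85, "motor": 0.60, "gripper": 0.55,
    "wall": 0.40, "door": 0.33, "shadow": 0.21, "noise": 0.05,
    "glare": 0.02, "drift": 0.01,
}

# Only the strongest concepts are forwarded as input to the higher layer.
higher_layer_input = attend(lower_layer_output, k=7)
print(higher_layer_input)
```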

Thursday, November 18, 2021

Arbitrarily fast computation with arbitrarily slow neurons

 Paul Haider, et al., have neurons guess what they will be doing in the future (specifically, each neuron uses currently available information to forecast the state of its membrane potential after relaxation): predictive processing.* Similarly, each layer of A.s.a. H. predicts its inputs and outputs at future time steps** and can compare these with future observations.*** It is a prediction engine, operating on multiple time scales.

Asa's currently active case supplies predictions for the next expected inputs and outputs. If the currently observed inputs are close enough to those predictions, the case's output actions are taken immediately.
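A sketch of that matching step follows. The cosine similarity measure, the threshold, and the example case are illustrative assumptions for this post, not the actual Asa code.

```python
# Hedged sketch of the prediction-matching step described above.
# The similarity measure, threshold, and example case are assumptions.
import math

def similarity(predicted, observed):
    """Cosine similarity between predicted and observed input vectors."""
    dot = sum(p * o for p, o in zip(predicted, observed))
    norm = math.sqrt(sum(p * p for p in predicted)) * \
           math.sqrt(sum(o * o for o in observed))
    return dot / norm if norm else 0.0

def step(active_case, observed_inputs, threshold=0.9):
    """If the observation is close enough to the case's prediction,
    emit the case's output actions immediately."""
    predicted_inputs, output_actions = active_case
    if similarity(predicted_inputs, observed_inputs) >= threshold:
        return output_actions          # act on the matched prediction
    return None                        # otherwise defer / search other cases

# Example: a case predicts an input pattern and supplies actions to take.
case = ([0.9, 0.1, 0.8], ["extend arm", "close gripper"])
print(step(case, [0.85, 0.15, 0.75]))  # close enough -> actions returned
print(step(case, [0.1, 0.9, 0.0]))     # too different -> None
```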

* Latent Equilibrium, 35th Conference on Neural Information Processing Systems, Sydney, Australia, 2021, arXiv:2110.14549v1, 27 Oct. 2021.

** See my blogs of  10 Feb. 2011 and 14 May 2012.

*** See my blog of 1 June 2017.

Monday, November 1, 2021

Compression

Each layer in the A.s.a. H. hierarchy typically performs a bit more than two orders of magnitude of compression.
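As a rough worked example of what such a ratio means, with sizes that are purely illustrative rather than measurements from Asa:

```python
# Illustrative check of the compression ratio across one layer.
# The sizes here are made-up examples, not measurements from Asa.
values_into_layer   = 40 * 100  # e.g. 40 input signals over a 100-step window
values_out_of_layer = 25        # e.g. a few dozen case activations passed up

ratio = values_into_layer / values_out_of_layer
print(f"compression ratio is about {ratio:.0f} : 1")  # 160:1, a bit over 100x
```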

Another reliability issue

 I typically use USB sticks to transfer case/concept activations from one level in the A.s.a. H. hierarchy to another level running on another computer. I do this so that I have a record and can study in detail what Asa is thinking. Last week the computer I was using (my desktop at ESU) recognized the drive but would not read the data file. (Previously it had been working normally.) The data was not corrupted: I took the stick home and my Surface Pro (also running Windows 10) read it fine. I may need to make backup copies every time I do such data transfers.
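A simple backup step could be added to the transfer routine, something like the sketch below. The paths and file name are hypothetical placeholders, not my actual file layout.

```python
# Hedged sketch: keep a timestamped backup copy of each transfer file.
# The paths and file name are hypothetical placeholders.
import os
import shutil
import time

source = "E:/asa_transfer/activations.txt"   # data file on the USB stick
backup_dir = "C:/asa_backups"
backup = os.path.join(backup_dir,
                      f"activations_{time.strftime('%Y%m%d_%H%M%S')}.txt")

os.makedirs(backup_dir, exist_ok=True)
shutil.copy2(source, backup)   # copy the data along with its timestamps
print("backup written to", backup)
```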

Saturday, October 16, 2021

The need for free time

 In my experience creativity requires free time. "Duties must be sufficiently light as to leave the scientist plenty of leisure time for playing and thinking."*

* Discovering, Robert Root-Bernstein, Harvard University Press, 1989, page 398. See also Noncommissioned Work, Burkus and Oster, Journal of Strategic Leadership, 4(1), 2012, page 48.