Tuesday, November 12, 2019

Software bugs and AI

Currently there is a lot of interest in using AI to help us find and fix software bugs. On the other hand, AI software may itself be more prone to bugs than more conventional software. If we don't fully understand what intelligence is or how it works, how can we know whether our AI software is buggy? There are even those who believe that human intelligence is itself a kludge.

Sunday, November 10, 2019

Arduinos for A.s.a. H. and AI

Employed on the lowest layer of the A.s.a. H. hierarchy, Arduinos are adequate for some light preprocessing, postprocessing (like PID control), and simple reflexes. (Raspberry Pis are suitable for somewhat heavier computing tasks. Arduinos can be plugged into them and the Raspberry Pis used as a next higher layer. See, for example, Beginning Robotics with Raspberry Pi and Arduino, Jeff Cicolani, Apress, 2018.) The Arduinos can then also do analog-to-digital conversions for the Raspberry Pis.
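The division of labor described above can be sketched in simulation. This is only an illustration, not code from A.s.a. H.: the `adc` quantization mimics an Arduino's 10-bit analog-to-digital converter, `reflex` stands in for a simple low-level reflex, and `higher_layer` stands in for whatever heavier processing the Raspberry Pi layer performs. All names and thresholds are hypothetical.

```python
def adc(voltage, vref=5.0, bits=10):
    """Quantize an analog voltage as a 10-bit Arduino ADC would."""
    count = int(voltage / vref * (2 ** bits - 1))
    return max(0, min(2 ** bits - 1, count))

def reflex(count, threshold=900):
    """Simple low-level reflex: trip immediately on a high reading."""
    return count >= threshold

def higher_layer(counts):
    """Stand-in for the Raspberry Pi layer, working on the digitized stream."""
    return sum(counts) / len(counts)

# The "Arduino" layer digitizes and reacts; the higher layer aggregates.
readings = [adc(v) for v in (1.2, 2.5, 4.9)]
alarms = [reflex(c) for c in readings]
print(readings, alarms, higher_layer(readings))
```

The point of the layering is that the reflex fires on the raw count with no round trip to the higher layer, while the higher layer sees only the already-digitized stream.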

Thursday, November 7, 2019

Plasma processing

Plasma processing and plasma chemistry may benefit from the use of pulsed plasma discharges. Pulsed discharges give access to plasma conditions that are not attainable with conventional steady discharges. (R. Jones, Sing. J. Phys., vol. 5, page 27, 1988)

Tuesday, October 22, 2019

Levels of explanation

A.s.a. H. learns causal sequences at various levels of abstraction in the memory hierarchy. Stephanie Ruphy explains why this may be valuable (Scientific Pluralism Reconsidered, U. Pittsburgh, 2013, especially pages 38-44).

Monday, October 21, 2019

Lifelong machine learning

As A.s.a. H.'s casebase grows, processing (thinking) will slow down unless forgetting (of less valuable cases) can be adjusted to roughly match the rate at which new cases are learned/added. How could/should this be done?
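One possible policy, sketched here as a minimal illustration and not as A.s.a. H.'s actual mechanism: hold the casebase at a fixed capacity, and whenever a new case arrives at a full casebase, forget the least valuable case (assuming each case carries some utility score). The class and its names are hypothetical.

```python
import heapq

class Casebase:
    """Fixed-capacity casebase with utility-based forgetting (a sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cases = []  # min-heap of (utility, case): lowest utility on top

    def learn(self, case, utility):
        if len(self.cases) < self.capacity:
            heapq.heappush(self.cases, (utility, case))
        elif utility > self.cases[0][0]:
            # Forget the least valuable case to make room for the new one.
            heapq.heapreplace(self.cases, (utility, case))
        # Otherwise the new case is less valuable than anything held: drop it.

cb = Casebase(capacity=3)
for name, u in [("a", 0.2), ("b", 0.9), ("c", 0.5), ("d", 0.7)]:
    cb.learn(name, u)
print(sorted(cb.cases))  # case "a" has been forgotten
```

With a heap, each learn/forget step costs O(log n), so the forgetting rate automatically equals the learning rate once capacity is reached; the open question in the post is really how to choose the utility measure and the capacity.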

Thursday, October 17, 2019

Recursive sketches

I have tried a variety of algorithms for A.s.a. H.: different measures of similarity, different means of learning and extrapolating, etc. Ghazi et al. have employed "recursive sketches" to learn/assemble modules for deep networks* similar to the A.s.a. H. hierarchical memory. Using different algorithms will likely give us different concepts (different categories, a different ontology) unless we are finding "natural kinds." Interesting either way.

*Their algorithms are described in Recursive Sketches for Modular Deep Learning, Thirty-sixth International Conference on Machine Learning, Long Beach, California, 2019.

Tuesday, October 15, 2019

The mind shapes the world we experience

Kant argued that the mind imposes categories on the objects of experience and that perception must conform to a spatial-temporal shaping. In the A.s.a. H. light software* spatial shaping is performed by the NM input arrangement and temporal shaping by the steps T and by TMAX. Categories/concepts are formed by the various layers of the hierarchical memory.** At the lowest level of the hierarchy Russell's "sense-data" are input, the data of immediate experience.

* See blogs of 10 February 2011 and 14 May 2012.
** See, for example, blogs of 1 January 2017 and 3 August 2018.
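The kind of temporal shaping mentioned above can be illustrated schematically. This sketch does not reproduce the actual roles of T and TMAX in the A.s.a. H. light code; it only shows how a sampling step T and a window length TMAX can impose temporal structure on a raw input stream before any higher layer sees it.

```python
def temporal_windows(stream, T=2, TMAX=3):
    """Sample the stream every T steps, then group the samples into
    windows of at most TMAX samples each (illustrative only)."""
    sampled = stream[::T]
    return [sampled[i:i + TMAX] for i in range(0, len(sampled), TMAX)]

# A raw stream of 12 time steps becomes two windows of shaped input.
print(temporal_windows(list(range(12))))
```

In Kantian terms, the stream itself is the "sense-data"; T and TMAX are part of the form the software imposes on it before categories are formed at higher layers.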