Sunday, July 31, 2016

Preventing overheating

I've added temperature > some critical T as a pain component for A.s.a. H., i.e., as an instinct. (Note that pain is a vector.) See Instinctive Computing by Y. Cai, Springer, 2016, and my blogs of 1 April 2013 and 28 Nov. 2014.
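As a sketch of how such an instinct might look (the names and the threshold value here are my assumptions, not Asa H's actual code):

```python
# Hypothetical sketch of an overheating instinct: temperature above a
# critical threshold contributes one component of a pain vector.
T_CRITICAL = 70.0  # assumed critical temperature (deg C), not Asa H's value

def pain_vector(temperature, other_pains=()):
    # Pain is a vector; overheating is just one instinctive component.
    overheat = max(0.0, (temperature - T_CRITICAL) / T_CRITICAL)
    return [overheat, *other_pains]

print(pain_vector(84.0))  # overheating component is nonzero
print(pain_vector(50.0))  # below threshold: no overheating pain
```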

Saturday, July 23, 2016

Helping to reconceptualize reality

Giving Asa H senses that humans don't have may help it to reconceptualize reality. (See my blog of 8 April 2016 for example.) Giving it effectors and actions that humans don't have might also help, be it wheeled propulsion, an electromagnetic crane, welder arms, laser cutters, or what have you.

Thursday, July 21, 2016


I am working on two publications, one on Asa H as a self-aware computing system and one on reconceptualizing reality.  The one on Asa H is based on the work described in this blog over the last few months of 2015. The work will be presented early next year and the abstract for it is probably in near final form:
I am only just beginning to pull together the work on reconceptualizing reality.  It may be more than a year before this paper is completed. I have a draft of the abstract for it:

Wednesday, July 20, 2016

Scientific pluralism: economics

Two identical learners, observing different example inputs, or the same examples in a different order, can form different categories and so judge later input differently.  Reality will come to be conceptualized in different ways. Alternate conceptualizations of reality underscore the need for scientific pluralism. (See, also, R. Jones, Trans. Kansas Acad. Sci., 2013, pg 78 and my blog of 17 August 2012.)
In economics, the marketplace is one popular model. In this model competition and individuality are emphasized. But, in keeping with scientific pluralism, we need other models for a better, more complete description of reality.  A family is another good model; in this model cooperation and collectivism are stressed.
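The order-dependence claimed above can be sketched with a toy online clusterer (a minimal online k-means of my own devising, with made-up data; this is not Asa H's learning algorithm):

```python
# Sketch: two identical online learners seeing the same examples in
# different orders can end up with different categories (centroids).
def online_kmeans(points, k=2, lr=0.5):
    # Initialize centroids from the first k points seen.
    centroids = [list(p) for p in points[:k]]
    for p in points[k:]:
        # Assign to the nearest centroid, then move it toward the point.
        j = min(range(k), key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(centroids[i], p)))
        centroids[j] = [c + lr * (a - c) for c, a in zip(centroids[j], p)]
    return sorted(centroids)

data = [(0.0, 0.0), (10.0, 10.0), (0.0, 1.0), (9.0, 10.0), (5.0, 5.0)]
a = online_kmeans(data)                  # one presentation order
b = online_kmeans(list(reversed(data)))  # same examples, reversed order
print(a != b)  # the learned categories differ
```

The two runs see identical examples yet settle on different centroids, i.e., they carve up the same reality differently.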

Saturday, July 16, 2016

Self-aware computing systems

I have ordered a copy of Peter Lewis et al.'s new book Self-aware Computing Systems (Springer, 2016). My artificial intelligence A.s.a. H. is a self-aware computing system and I want to compare it with what Lewis et al. have in mind.

Asa's concept of self (see my blog of 4 March 2015 for one early example) has mostly been learned during interaction with the real world and simulated environments (see chapter 1 of my book Twelve Papers for some examples), whereas Lewis et al.'s systems are intended to be mostly hand coded (but would include machine learning components).

Some ways in which Lewis et al.'s work is similar to Asa are:

Strong ties to and use of parallel processing
Power consumption awareness
Self-tuning resource allocation
Both evolve, adapt, learn

Some ways in which Lewis et al.'s work differs from Asa are:

Emphasis on specialized (factored) operating system
Asa's use of emergence (Asa learns a model/concept of self rather like infant humans do.)

Monday, July 11, 2016


I am reorganizing my library, dividing it in half.  About 3000 books form my research library; another 3000 are a general-purpose library.  I don't have a lot of space, so about half the books are paper and about half are electronic.  I prefer to read paper, but electronic is easier to search.

Friday, July 8, 2016


Following all the hype I have downloaded a copy of Google's TensorFlow software and ordered a book on the subject.  Although TensorFlow lets you do linear regression and clustering and create neural networks (for example), I have not found anything that you can do with TensorFlow that I can't already do with my other software packages.

Wednesday, July 6, 2016

Semantic computing

Using the sensors and actuators described in my blog of 1 Oct. 2015, my artificial intelligence A.s.a. H. is able to do semantic computing, understanding the meaning of things like sound, color, temperature, acceleration, force, hard and soft, taste, etc.
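One minimal way to illustrate grounding a symbol in sensor data (a hypothetical sketch of my own, not Asa H's actual mechanism): take a concept to be the running average of the sensor vectors that co-occur with a label.

```python
# Sketch: a symbol's "meaning" as the average of co-occurring sensor vectors.
from collections import defaultdict

class Grounding:
    def __init__(self, dims=3):
        self.sums = defaultdict(lambda: [0.0] * dims)
        self.counts = defaultdict(int)

    def observe(self, label, sensors):
        # Accumulate sensor readings seen together with this label.
        s = self.sums[label]
        for i, v in enumerate(sensors):
            s[i] += v
        self.counts[label] += 1

    def meaning(self, label):
        # The grounded concept: mean sensor vector for the label.
        n = self.counts[label]
        return [v / n for v in self.sums[label]]

g = Grounding()
# (temperature, loudness, hardness) readings tagged "hot" -- made-up values
g.observe("hot", (95.0, 0.1, 0.5))
g.observe("hot", (90.0, 0.2, 0.4))
print(g.meaning("hot"))  # the grounded concept vector for "hot"
```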

Tuesday, July 5, 2016

Oppositional concepts

In structuralism, (binary) opposition is seen as fundamental to human thought. See Oppositional Concepts in Computational Intelligence, by H. R. Tizhoosh, Springer, 2008; Positions, by Jacques Derrida, Univ. Chicago Press, 1982; and the works of Ferdinand de Saussure. The importance and use of oppositional concepts is discussed by various authors in Tizhoosh's book. When my artificial intelligence A.s.a. H. includes a category's (concept's) logical complement it has this form. (See my paper in Trans. Kansas Acad. of Sci., vol. 109, # 3/4, 2006, equation 3, with the printing error corrected so that it reads Ci* = 1 - Ci = 1 - In.Ini.)  (You only need to store Ci; you can generate Ci* from it when needed.) Similarly, when using vector categories (as Asa H does) we can consider the negation of each vector.
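The storage point can be illustrated directly (a minimal sketch with made-up membership values): the complement of a fuzzy vector category is generated elementwise as Ci* = 1 - Ci, so only Ci itself need be stored.

```python
# Elementwise logical complement of a fuzzy vector category:
# Ci* = 1 - Ci, computed on demand rather than stored.
def complement(c):
    return [1.0 - x for x in c]

category = [0.9, 0.2, 0.6]   # hypothetical fuzzy membership vector
print(complement(category))  # the opposing category, elementwise 1 - Ci
```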

Friday, July 1, 2016

How deep should deep learners be?

I have addressed before the question of how deep our networks should be, whether Asa H, artificial neural networks, semantic networks, or whatever. (See my blog of 14 March 2014 for example.) In December of last year Microsoft (Peter Lee) reported work employing a network (an ANN) of 152 layers.  This seems excessive.

Features should be defined consistent with "carving up nature at the joints" (as in Plato's Phaedrus). Admittedly, Asa H adds a few layers in the form of the various preprocessors that it makes use of.