Friday, December 19, 2014

Asa H with multiple memory systems

Since the beginning of the Asa H project I have employed multiple similarity measures and learning algorithms (often integrated into a single program).  I am now playing with versions of Asa H that make use of more than one sort of memory/database/knowledge representation simultaneously.

Further use of simple parallel processing with Asa H

The Asa H architecture consists of a hierarchical memory assembled out of clustering modules and feature detectors.  I have done some feature extraction outside of the main Asa H program using an autoassociative neural network trained on the various cases from the Asa H casebase.  The hidden layer of the autoassociator network provides the feature detectors.  These networks require considerable time to train by backpropagation.  A half dozen or so networks can be trained in parallel on individual computers.
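As a sketch of the idea (not the actual networks or casebase used here), a toy 4-2-4 autoassociator in plain Python: it is trained by backpropagation to reproduce its input, and the hidden-unit activations then serve as the extracted features. All names and sizes are illustrative.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy 4-2-4 autoassociator: reproduce the input at the output;
# the 2-unit hidden layer becomes the feature detector.
n_in, n_hid = 4, 2
W1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
W2 = [[random.uniform(-0.5, 0.5) for _ in range(n_hid)] for _ in range(n_in)]

cases = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def forward(x):
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(n_in))) for j in range(n_hid)]
    y = [sigmoid(sum(W2[i][j] * h[j] for j in range(n_hid))) for i in range(n_in)]
    return h, y

def train_epoch(lr=2.0):
    sq_err = 0.0
    for x in cases:
        h, y = forward(x)
        # output deltas (squared error, sigmoid units)
        dy = [(y[i] - x[i]) * y[i] * (1 - y[i]) for i in range(n_in)]
        # hidden deltas, backpropagated through W2 before it is updated
        dh = [sum(dy[i] * W2[i][j] for i in range(n_in)) * h[j] * (1 - h[j])
              for j in range(n_hid)]
        for i in range(n_in):
            for j in range(n_hid):
                W2[i][j] -= lr * dy[i] * h[j]
        for j in range(n_hid):
            for i in range(n_in):
                W1[j][i] -= lr * dh[j] * x[i]
        sq_err += sum((y[i] - x[i]) ** 2 for i in range(n_in))
    return sq_err

e0 = train_epoch()
for _ in range(2000):
    e = train_epoch()

# hidden-layer activations = the features extracted for each case
features = [forward(x)[0] for x in cases]
```

Training several such networks in parallel is then just a matter of running one copy of a script like this per machine, each on its own slice of the casebase.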

Wednesday, December 17, 2014

A chatbot passes a minimal Turing test

Prof. Kevin Warwick of the University of Reading reports that on 7 June 2014, following a 5-minute chat, a Russian chatbot named Eugene Goostman convinced 33% of a panel of judges that it was a 13-year-old boy.  This was as much luck as expertise, but the scores on these tests, and natural language processing capability in general, have been slowly improving over the years.  I'm told that several online software agents have left internet users thinking that they were real humans.

Thursday, December 11, 2014

Sensors for mobile robots

One of the biggest obstacles in AI robotics is providing adequate sensor arrays.  I have acquired one or more of each of the following:

ultrasonic sensor
sound sensor
force sensors
EOPD sensor
touch sensors
magnetic field probe
temperature sensors
color sensor
light sensor
current sensor
voltage sensors
digital compass
IR seeker

Most of these are for use with Lego NXT hardware. I have several multiplexers so I can use many of these at once. Some of these sensors are used to define concepts for Asa, such as north, south, east, and west.

Tuesday, December 9, 2014

Intelligences ask questions

It has been suggested that the difference between animal intelligence and human-level intelligence is that humans ask questions spontaneously. Many AI programs ask questions, but what about the spontaneous asking of questions, and what about learning to ask questions?
My AI Asa H learns by observing and copying behaviors that it sees.  It sees agents like me ask questions.  Under similar circumstances Asa will then also ask questions.
Prairie dogs emit warning calls when they see evidence of a human nearby.  Similarly, at the lowest level of complexity a Lego NXT robot (either real or in simulation) may emit a beep whenever it detects a red ball.  A robot with an Asa H brain could see this and learn the same call. This easily occurs in simulation (though it would be difficult with Lego hardware).
In a multirobot experiment coordination of action may involve a robot asking for help (to push a heavy load for example). See, for example, Multirobot cooperation..., Kolling and Carpin, ICRA 2006, pg 1311.  The primitive call could be understood as a question; "Will you help?" Again, Asa H would learn to ask this same question when it encountered a similar situation.

Thursday, December 4, 2014

Asa H natural language processing

I can now give Asa H a vocabulary of nearly 1000 words, enough for a telegraphic speech capability. These words are associated with concepts that are defined at various levels in the Asa H hierarchical memory.

Wednesday, December 3, 2014


Ryle noted that good philosophical thinking required a good theory of categories.  Various philosophers in their arguments cite the category errors of their opponents.  But it is important to realize that the boundary of any given category is fuzzy and that categories evolve and change. (Just as all language evolves and changes.)  One of the things that I do in my experiments with Asa H is to follow how Asa's categories are formed and then evolve over time. (Watching thinking as it occurs in Asa's language of thought.) Running Asa H on a set of parallel computers the category activity is typically what is transmitted from one computer to the next, and from one level of the Asa H hierarchy to the next. (See my blog of 26 Aug. 2013 for an example of the simplest way this can be done.)

Monday, December 1, 2014

Do mathematical entities like numbers really exist?

Rather than Platonism (mathematical realism) I believe that numbers (and addition and subtraction and multiplication...) were developed by abstraction from earlier (physical) machinery that once existed in the real world.  Cockshott et al. (Computation and Its Limits, Oxford Univ. Press, 2012) describe how this may have occurred on pages 11 through about 27. Some of my experiments with Asa H explore how abstractions are formed. It's easier to follow what's going on inside Asa than it is to understand the inner workings of a neural network program.  It's harder to follow what's going on inside Asa than it is to follow deductions in an expert system, however.

Higher order mathematical operations can then be composed out of addition, subtraction, and multiplication as is done with computers.

Scientism with values

I have argued in favor of a brand of scientism.  But, since I do not believe science is (or can be) value free, it is a scientism with a system of values.  (see my blogs of 20 Sept. 2013, 25 Oct. 2011 and 1 Sept. 2012)
This, as well as "scientific pluralism" (see my blogs of 8 Sept. 2011, 17 Aug. 2012 and "Changing what science is and how it's done", R. Jones, Trans. Kan. Acad. Sci., 116, 1/2, pg 78, 2013), counters some of the criticisms of the more traditional varieties of scientism. (see for example Scientism, Tom Sorell, Routledge, 1991)

Friday, November 28, 2014

Emotion, fear, reflex

A Lego NXT robot can be given an innate fear of heights using code like:

#define USONIC     IN_1
#define WHEELS     OUT_AC
#define CLIFF_DETECTED (SensorUS(USONIC) > 35)

sub AvoidCliff()
{
   if (CLIFF_DETECTED) Off(WHEELS);  // stop the drive motors at a drop-off
}
I have experimented with 5 mobile robots which were given a fear of heights in order to prevent them from falling down stairs or off of a tabletop.  Of course we can arrange for fear to be modulated by other cognitive processing.

Wednesday, November 26, 2014

Novelty detector/filter

In Asa H one or more similarity measures examine newly input spatial-temporal patterns and either add them to existing clusters (cases) or record a new (novel) pattern (see blogs of 10 Feb. 2011 and 14 May 2012 for code).
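A minimal sketch of such a novelty filter in Python, with cosine similarity standing in for whichever similarity measure is used; the threshold, the names, and the running-average cluster update are all illustrative, not Asa's actual code:

```python
import math

def cosine(a, b):
    # cosine similarity between two equal-length pattern vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def observe(pattern, casebase, threshold=0.9):
    """Fold `pattern` into the best-matching case, or record it as novel."""
    best, best_sim = None, 0.0
    for case in casebase:
        sim = cosine(pattern, case["center"])
        if sim > best_sim:
            best, best_sim = case, sim
    if best is not None and best_sim >= threshold:
        # merge into the existing cluster (running average of members)
        n = best["count"]
        best["center"] = [(c * n + p) / (n + 1)
                          for c, p in zip(best["center"], pattern)]
        best["count"] = n + 1
        return False  # familiar
    casebase.append({"center": list(pattern), "count": 1})
    return True  # novel
```

Each call either merges the pattern into its nearest cluster or records it as a new case; the returned flag marks novelty.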

Tuesday, November 25, 2014

Turing tests

I have no intention of subjecting Asa H to the Turing test.  Humans suffer from a number of cognitive defects like confirmation bias, anchoring, framing, the focusing illusion, motivated reasoning, false memory, etc.  These could be used to distinguish a human reasoner from an AI.  Perhaps I could give Asa H such defects but I have no desire to do so.

Monday, November 17, 2014

Robot AI

I am not someone who believes that an AI, in order to be intelligent, must be embodied. I just find that it's easier to define some primitives in terms of sensors and their signals.

Friday, November 14, 2014

Multi-mind effect

Selmer Bringsjord, et al, of the Rensselaer AI lab state that "logically untrained individuals cannot solve problems that require context-independent reasoning" but that groups of the same individuals can solve such problems.  They further state that "in decision-making ....using only one representation or one type of reasoning can lead to erroneous conclusions."  This argues again for a society of intelligent agents and for scientific pluralism.

Monday, November 10, 2014

Minimalist programming

I see the human as the weak link in computer programming, so I try to keep things simple for the human.  Consequently, I write much of my code in a subset of BASIC, and a bit of PROLOG when I want to do logic programming. Clearly, some compromises are involved here as there must be.

Thursday, November 6, 2014

Creativity in Asa H 2.0

Asa H discovered and reported to me that (in the context of something like a neural network with feedback, or the related models of consciousness) "feedback narrows view." This was something I did not know.  Much as with oscillators in electronics, feedback narrows bandwidth.  Subsequently, in searching Google I find evidence that cortical feedback in the human brain may restrict the receptive field of a cortical cell assembly. (Krupa, et al, Proc. Nat. Acad. Sci., 6 July 1999, pg 8200)

Tuesday, November 4, 2014

Can truth be a compromise?

Yes it can be. A voting classifier might be an example. One model/theory may predict one trajectory for a hurricane while another model predicts a different trajectory.  The average of the two (or more) models may do the best job of predicting the hurricane's actual path. Scientific pluralism again. (see my blog of 17 Aug. 2012)


My general principle is to vote for the furthest left-leaning candidate that has a reasonable chance of winning.  (So I voted for Gore rather than Nader back in 2000.)  I'd like to see a social democratic government in the U.S.  I get a lot of political emails. I contributed to the Obama campaign in 2012 and that seems to have pushed the number of emails even higher.  In the last few months I have been getting so many that I delete them on sight.  Anything useful they might have to say (like get out the vote efforts) has long since been buried in the requests for money.  They've become self-defeating.

Voter turnout is too low. Should citizens simply be required to vote, but with a "none of the above" option on the ballot? (The Athenians held that voting was a duty. Today, 22 countries have compulsory voting; 10 of these enforce it.)

Sunday, October 26, 2014

Parallel computing

Parallel computing is especially useful for neural network training.  I have trained some neural nets for as long as a week.  I have trained 3 or 4 of these at one time on 3 or 4 conventional computers.  I have enough computers that I could train at least 10 neural networks at a time, giving a factor of 10 speedup. This is useful for training preprocessor networks, expert neural networks, voting neural network classifiers, modular neural networks, etc.

spatial and temporal patterns

My Asa H 2.0 artificial intelligence receives a stream of inputs and generates two output streams: one composed of physical actions taken in the world, the other a set of models that describe the world Asa finds itself in.  Of all the patterns Asa comes to recognize in its input stream the majority are spatial patterns; only a minority are temporal patterns.  Of all the physical outputs Asa learns the majority are temporal patterns; only a minority are spatial patterns. See also my blog of 22 Sept. 2014.

Friday, October 24, 2014

A danger in religion

If you believe in souls and life after death you may allow yourself to do things that are extremely dangerous for you and for society.

Wednesday, October 22, 2014


I have taught Asa H 2.0 to count (small numbers).

Vector values

In Asa H 2.0 light I typically use pattern length and frequency of pattern occurrence as components of a case's vector value/utility. (see my blog of 19 Feb. 2011) A pattern's complexity can also be used in measuring its importance.  It is not clear which measure of complexity to use, however: permutation entropy? (Bandt and Pompe, Phys. Rev. Lett., 11 April 2002) Ke and Tong's measure? (Phys. Rev. E, 2008) Or something else?
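For reference, permutation entropy is simple to compute. This sketch follows the Bandt-Pompe recipe, counting ordinal patterns of a fixed order and taking the Shannon entropy of their distribution; the function name is mine:

```python
from collections import Counter
from math import log

def permutation_entropy(series, order=3):
    """Shannon entropy (in bits) of the ordinal patterns of length `order`."""
    counts = Counter()
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # the ordinal pattern: ranks of the window's values
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] += 1
    total = sum(counts.values())
    return -sum((c / total) * log(c / total, 2) for c in counts.values())
```

A monotone series yields entropy 0; the more disordered the series, the higher the value.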

Diversity in a society of Asa agents

The various agents can be trained in a wide variety of specialties.  The agents may have different values; one agent may value lifespan more than offspring.  Another agent may value offspring more than lifespan.  One knowledgebase may contain cases that value case length or complexity more.  Another knowledgebase may contain cases that value frequency of pattern recurrence more.  Such diversity will help the society deal with complex time varying environments.  The society of agents will be more capable than a single agent.

Sunday, October 19, 2014

Asa's fuzzy protologic

One sort of proto-logic works by verifying that a subset of symbols is present in a certain set. (Principles of Quantum Artificial Intelligence, Andreas Wichert, World Scientific, 2014, pg 31)  The set is represented by a vector which is divided into sub-vectors.  Asa searches for sub-vectors in this way but is satisfied with an approximate rather than an exact match.
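The contrast fits in a few lines of Python. Strict subset proto-logic demands that every query symbol be present; the fuzzy version accepts a partial hit. The names and the default threshold here are illustrative, not Asa's actual parameters:

```python
def fuzzy_contains(query, target_set, threshold=0.75):
    """Strict proto-logic would require threshold = 1.0 (every symbol of the
    query sub-vector present in the set); the fuzzy version relaxes this
    to an approximate match."""
    hits = sum(1 for symbol in query if symbol in target_set)
    return hits / len(query) >= threshold
```

With threshold=1.0 this reduces to the exact subset check described by Wichert.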

Saturday, October 18, 2014

Collective leadership

Having multiple models of the world is better than having just one (see my blogs of 17 Aug. 2012 and 13 Aug. 2012).  As a consequence diverse groups make better decisions than individuals do (and I dislike traditional managers, governors, presidents, etc.).  A society of Asa agents may outperform an individual agent.

Friday, October 17, 2014

Naming robot body parts

The concepts, "touch left", "touch right", "touch front", and "touch back" have been given to a Lego NXT robot by touching sensors at those locations in association with language input of the respective terms. The more general concept of touch is then learned at the next higher level in the hierarchy.  One can similarly locate and name any number of body parts that have input sensors.  Internal organs like the battery and recharging circuit can be identified with the sensation of "charging", "high charge", and "low charge." Categories like "right" can also be learned at a higher level in the hierarchy by association of experiences with "touch right", "turn right", etc.

Wednesday, October 15, 2014

Possible conscious states in Asa H

There are many different theories of consciousness and I have discussed these before in connection with Asa H and AI in general.  I still believe that there is something to a number of these different theories. I just want to add an additional observation from my work with Asa H.

As Asa receives a stream of input it may identify one or more cases (categories or patterns or sequences) in its case memories that closely match this stream (so far at least).  Any actions that will be triggered are predictions taken from these remembered cases.  In order to reduce search (when one is considering a large, complex case base memory) I sometimes stick with the same active case(s) so long as the degree of match with new input does not degrade too much.  (i.e., I don't do a search through all of the case memory every time a small new amount of input appears.)  The cases which are present in the memory but which are not active and not being searched through at that moment might be considered to be in Asa's "unconscious."  The case(s) that are active, that Asa is following at this moment, might be considered to be what Asa is "conscious" of.  Typically the amount of memory that Asa is conscious of is much smaller than what Asa is unconscious of.  Consciousness here results from the need to reduce search/computational complexity. Any actions that are triggered come from the conscious case(s).

We might choose to keep a maximum of 7-9 cases active.  (But at how many levels in the hierarchy?) The unconscious cases are in a long term memory.  The conscious cases are held active short term. Substantial thought (the Asa extrapolation routine for example) is running subconsciously (and is creative).
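The conscious/unconscious bookkeeping can be sketched as follows. This is a simplification with illustrative names; match_fn stands for whatever similarity measure the agent uses:

```python
def step(input_vec, conscious, casebase, match_fn, threshold=0.7, capacity=7):
    """Follow the currently active ('conscious') cases while they still match;
    search the whole ('unconscious') casebase only when the match degrades."""
    # drop conscious cases whose match with the new input has degraded
    conscious = [c for c in conscious if match_fn(input_vec, c) >= threshold]
    if not conscious:
        # expensive step: search all of case memory, keep the best few
        ranked = sorted(casebase, key=lambda c: match_fn(input_vec, c),
                        reverse=True)
        conscious = ranked[:capacity]
    return conscious
```

The search over the whole casebase, the expensive step, happens only when every currently conscious case has fallen below threshold.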

Tabula rasa learning again

I'm going to try the following: watch as a layer (in Asa H) learns cases/categories.  Only when the total number of categories being learned levels off, begin to train the next layer up.  Repeat for each layer on up.  I am assuming all of the input has been structured, presenting the simplest patterns/categories first.  That may be a lot of work.
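A hypothetical plateau test for deciding when to unlock the next layer might look like this; the window and tolerance are guesses to be tuned, and the name is mine:

```python
def categories_leveled_off(counts, window=5, tolerance=0):
    """True once the number of learned categories has stopped growing
    over the last `window` observations."""
    if len(counts) < window:
        return False  # not enough history yet
    recent = counts[-window:]
    return max(recent) - min(recent) <= tolerance
```

One would log the layer's category count after each training batch and begin training the next layer up only once this returns True.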

Tuesday, October 14, 2014


Sometimes we know how to implement the boxes we draw in our diagrams.  But all too often in artificial intelligence/cognitive architecture work the boxes are opaque; we have little or no idea what to put in them.

With Asa H I hope I have made clear (at least some ways in which) you can implement the boxes (of the diagram at my web site, cognitive scientist, theory of thought and mind).

I went on and offered some actual code in my blogs of 10 Feb. 2011 and 14 May 2012.

Tabula rasa Asa H

When training Asa H starting from an empty case-base I typically present the simplest/smallest things (categories) first, training Asa much like we might teach a human infant.  Perhaps we should also turn off/postpone learning in the higher layers of the Asa hierarchy until the lowest layer(s) have learned the simplest/smaller categories. How long should this delay be for each layer?

Monday, October 6, 2014


The objective of life is to survive, expand, and fill as much of space and time as possible.  Intelligences inherit this goal and these values. A vector value would then include the agent's lifespan and the sum of the lifespans of all of the agent's offspring.  A measure of the spread of the offspring over space might also be included.  During the agent's life it will only be able to estimate its ultimate lifespan.  Even if offspring have only one parent they will learn and evolve during their lifetimes and only partially reflect the value/utility of their parent. Offspring lifespans will also vary.  The simple scalar utility, U = L(1+N), where L is the agent's lifespan and N is the number of offspring, is then an even cruder estimate.  When cooperative agent societies are considered, diversity, in the form of a wide range of agent specialists, should also be valued.

Thursday, October 2, 2014

Hard-wiring Asa H

It may be possible to speed up Asa H by hard-wiring at least some of its functionality.  As an initial step in this direction I am loading a portion of Asa's case memory onto FPGAs.

Lone wolf

I have tended to work alone.  This is partly  my personality.  I have also been willing to change directions "on a dime."  You can't do that if you work in a group. Neither have I wanted to be confined to working on purely "mainstream" topics.  I've wanted greater freedom than that allows.

Wednesday, October 1, 2014


Some religious thinkers and philosophers take love to be a foundational quantity.  Rather, a biological theory of love would take it to be an emergent quantity, evolved to aid in survival of offspring, family bonding, or societal bonding and cooperation.  It might be one of the primitive drives which form a part of the (imperfect) human value system.

Monday, September 29, 2014

A convolutional Asa H network

By using multiple copies of the Asa H 2.0 code in each layer of the hierarchy and by transferring copies of learned cases between these (code copies) it is then possible to construct a convolutional network out of Asa H.  Convolutional networks have proven to be useful in object recognition, for instance. (see the work of Y. LeCun)

Thursday, September 25, 2014

Asa H action selection

At any given time various layers of Asa H may predict one or more cases that will be active next.  Higher layers make their predictions based upon a longer span of inputs (and make a larger number of predictions that reach further into the future). Some of these predicted cases can involve predicted actions Asa could perform. During the course of experimenting with several hundred Asa H programs I have tried out a number of different action selection algorithms.  With the entire Asa network acting as an evaluation function, each of the possible actions can be "tried" (simulated) to see which gives the highest utility. No output action is actually taken during this simulation/evaluation stage.  The simulated action, actions, or non-action which gives the highest utility (measured at the top of the network hierarchy) is then selected and is scheduled to be taken one or more time steps into the future. (Actions scheduled far enough into the future could subsequently be preempted by further future predictions and evaluations.) To reduce search/complexity one can put a time limit on how far into the future one looks and tries to predict (establish a "horizon").
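In outline, the selection loop might be sketched like this, with simulate standing in for the Asa network used as a forward model and utility for the value read out at the top of the hierarchy (both are illustrative stand-ins, not Asa's actual interfaces):

```python
def select_action(state, candidate_actions, simulate, utility, horizon=3):
    """Simulate each candidate action (including doing nothing) out to a
    fixed horizon and pick the one the evaluation ranks highest; no real
    output is produced during the simulation stage."""
    best_action, best_u = None, float("-inf")
    for action in [None] + list(candidate_actions):
        s = simulate(state, action)          # "try" the action internally
        for _ in range(horizon - 1):
            s = simulate(s, None)            # roll the model forward
        u = utility(s)
        if u > best_u:
            best_action, best_u = action, u
    return best_action, best_u
```

The None entry lets non-action win when every candidate lowers predicted utility, and the horizon bounds the search cost.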

Wednesday, September 24, 2014

Asa H vs other deep learners

Other deep learners typically have a fixed number of layers, a fixed number of nodes, a fixed number of nodes per given layer, etc.  The ImageNet network of Krizhevsky, Sutskever, and Hinton, for instance, had 8 layers, roughly 650,000 nodes in total, and a fixed number of nodes in each given layer.  Asa H, on the other hand, adds layers and cases/concepts as it learns, and the number of cases per layer varies with time as Asa learns.

It is an advantage of Asa H over humans that it can add memory and processors as and when it needs them.

Tuesday, September 23, 2014


Immorality appears to be a vector having components like:
1. harmfulness
2. dishonesty
3. disgust

Monday, September 22, 2014

Asa H output

Our experiments with Asa H have involved much more input than output.  Humans have a couple hundred million rods in their eyes and only a few hundred skeletal muscles.  A scientist might be able to read perhaps 100 papers for every one he writes himself.  An intelligence in the world simply gets a lot more input than the output it generates. In the lowest layers of the Asa hierarchy overt action is rare.  I typically mark those few cases that involve action in order to find them easily.
I am also especially interested in concept formation by AIs (how conceptual knowledge is formed from perceptual inputs) and have spent some time studying that. Such concepts are a kind of "output" but most of my robots can only do what Lego NXT servomotors can do. I would like to upgrade this as well as my input sensors.

Thursday, September 18, 2014

Magical thinking

Is the relationship between math and physics descriptive or prescriptive? I view math as a language used to describe what I see and as a theory (or a set of theories) of patterns. At one time people thought that gods had secret names.  If you knew the name you could call up a god to do your bidding. Abracadabra. Open sesame.  Today some people think mathematical "rules" force nature to be some way, to behave in some way. I think this is magical thinking.  I take math to be descriptive.  A language. It works well because we created it for this very purpose (e.g. calculus). And I think it's still only an approximate description at that.

Wednesday, September 17, 2014

On the nature of thought

Thoughts are causal of course.  Ultimately they can make muscles or servo motors do work in the world.  My theory of thought (see my web site, cognitive scientist, theory of thought and mind) is an attempt to describe the details of this activity; causal changes to some type of memory/recording, physically comparing the contents of  buffers, creating and storing new memory patterns, etc.

Things that exist, what is real, changes to reality

Some things are defined by a list of (measurable) properties/attributes.  Some things are defined by their function/use.  A "chair" for example.  (A rock or a pile of hay might be used as a chair.)
What a thing is may well change over time.  From measuring things like size and shape and mass/weight we have moved on to measuring electric charge and then quantum mechanical "spin." We have new measurements with which we can describe an object. We also find new functions for things.  Water or dirt can now be used to shield us from nuclear radiation.  A computing device that once crunched numbers may now be used to manipulate symbols and conduct social discourse.
If a mind is given greater memory and more processing speed it will likely form more and different categories with which to describe its experiences and its world. It may abstract and compress and generalize less as it  may have less need to do so. In all these ways what is real changes.


(most) animals move.
But which way should they move?
Choosing is the beginning of values and thought/intelligence.
Moving toward something sensed.  a primitive "drive"
Moving away from something sensed. a primitive "aversion"

Tuesday, September 16, 2014

The responsibility of congress

According to the U.S. Constitution it is Congress' job to declare war, not the president's.  This is just another responsibility the Republicans are shirking.

Friday, September 12, 2014

Asa H preprocessors

I have used a number of preprocessors with Asa H.  I am currently trying various (data) compressors in this role.

Thursday, September 11, 2014

Health risks

Boxing and (american) football are probably too dangerous.  They probably should be banned.

Asa H as cognitive science

"Recent developments in neuroscientific theory have suggested that cognition is inherently memory-based, where memory is fundamentally associative." (Baxter and Browne, Memory as the substrate of cognition, in Proc. of 10th Inter. Conf. on Epigenetic robotics, 2010) Asa H is just such an associative, memory-based system.

Friday, September 5, 2014

Automatic keyword discovery

A given word may occur N1 times in an entire library of documents.  The same word may appear N2 times in some single document.  Words that have the highest values of the ratio N2/N1 for a given document are the best keywords for that document.  (A stoplist may be used as a preprocessor.) Such a scheme might also be useful for routing input to a collection of specialist AIs. (22 Aug. 2014 blog)  Patterns could be counted as well as traditional words.
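A minimal version of this keyword scheme in Python; the stoplist and the names are illustrative:

```python
from collections import Counter

STOPLIST = {"the", "a", "an", "of", "and", "to", "in"}  # preprocessor

def keywords(document_words, library_counts, top=3):
    """Rank words by N2/N1: count in this document (N2) over count in the
    whole library (N1). Words the library rarely uses but the document
    repeats score highest."""
    doc = Counter(w for w in document_words if w not in STOPLIST)
    ratios = {w: n2 / library_counts[w]
              for w, n2 in doc.items() if w in library_counts}
    return sorted(ratios, key=ratios.get, reverse=True)[:top]
```

Routing to specialist AIs would then just compare a new input's top keywords against each specialist's vocabulary.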

Tuesday, September 2, 2014

Sensor upgrades and advanced concept formation

We have taught Asa H a few hundred basic concepts (vocabulary) (see blogs of 14 Feb., 16 Feb., 12 March, 1 April, and 24 May 2013) using the simplest possible sensors (mostly Lego NXT).  We will have to upgrade (see blog of 17 July 2014) in order to learn more complex concepts (like "life", "human", etc.).

Monday, September 1, 2014

Simple cut and paste modular programming

I frequently cut and paste code snippets and modules into new programs I'm writing.  I keep an "excluded variables" list with the code library that these snippets and modules come from.  When new code is added to the library it cannot contain variable names or line numbers that have been used in other library code. (Alternatively, instead of an "excluded variables" list, you can search the code library for any variable name or line number that you want to use.) Gluing together modules while writing a new program then consists mostly of adding lines of code that equate the (local) variables between modules.
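The "search the library" alternative is easy to automate. This sketch scans a new snippet for identifiers already used in the library; the crude regex also catches language keywords, and the function name is mine:

```python
import re

def name_collisions(new_snippet, library_source):
    """List identifiers in a new snippet that already occur in the code
    library; an empty result means the snippet is safe to paste in."""
    ident = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")
    library_names = set(ident.findall(library_source))
    return sorted(set(ident.findall(new_snippet)) & library_names)
```

In practice one would filter out the language's keywords before comparing.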

Thursday, August 28, 2014

Distraction and focus of attention again

Each layer of the Asa H hierarchy passes a vector up to the next layer.  Perhaps focus of attention might be obtained in the following way: calculate an average and standard deviation from all of the vector components (assume all components are positive), keep only those components which are "a couple" of standard deviations above the average, delete all other components, and renormalize the vector. Report the zero vector if no components survive this test. (What number should "a couple" really be? Should it vary?) I plan to try this on Asa H.
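A sketch of the proposed filter, with "a couple" left as a parameter to be tuned:

```python
import math

def focus(vector, n_sigmas=2.0):
    """Keep only components 'a couple' of standard deviations above the
    mean (components assumed non-negative), then renormalize; return the
    zero vector if nothing survives."""
    n = len(vector)
    mean = sum(vector) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in vector) / n)
    kept = [x if x > mean + n_sigmas * sd else 0.0 for x in vector]
    norm = math.sqrt(sum(x * x for x in kept))
    return [x / norm for x in kept] if norm else kept
```

A nearly uniform vector is suppressed entirely, while a vector with one dominant component passes that component through alone.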

Wednesday, August 27, 2014

A separate training phase in Asa H

We can give Asa H distinct training and performance stages by altering thresholds like Th2 (line 75 of my code in the blog of 10 Feb. 2011). A casebase can be recorded while using one value of Th2 (code from the 26 Aug. 2013 blog) and then employed by an agent using a different value of Th2 (and possibly other thresholds).

Friday, August 22, 2014

Specialist AIs

Asa H can be trained in an area of expertise and the resulting casebase/knowledgebase saved to an external drive. (see, for example, my blog of 26 Aug. 2013) I have a 4 terabyte drive for this purpose.  Such specialty knowledge can be organized much like the Dewey decimal system and the standard industrial classification.

Friday, August 15, 2014

The Asa H value hierarchy

The values assigned to Asa H cases may vary from one level in the hierarchy to another. At the lowest level(s) case length and how often the case is seen to recur are valued (see, for instance, Asa H 2.0 light in my blog of 10 Feb. 2011).  At the highest level in the hierarchy agent lifespan and number of offspring (disk copies) may be what's most highly valued (see, for instance, my paper, Trans. Kan. Acad. Sci., vol. 109, No. 3/4, 2006).

Ensemble learning with Asa H

Various Asa H experiments have employed ensemble learning.  Perhaps the simplest averages the output from two or more individual Asa H agents.  These may have different similarity measures for instance or have been trained separately. Ensemble learning is also possible within a single Asa agent.  The N best case matches can be followed, for example, and the output can be generated by voting, averaging, interpolation, or the like.  Weighting of the individual outputs by the degree of case match and case utility can be employed. Again, as a rule groups make better decisions than individuals do.
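The within-agent version, weighting each of the N best case matches by degree of match times utility, can be sketched as follows (data layout and names are illustrative):

```python
def ensemble_prediction(matches):
    """Combine the N best case matches into one output: a weighted average
    where each case's weight is its degree of match times its utility."""
    total_w = sum(m["match"] * m["utility"] for m in matches)
    if total_w == 0:
        return None  # nothing matched at all
    dim = len(matches[0]["prediction"])
    out = [0.0] * dim
    for m in matches:
        w = m["match"] * m["utility"] / total_w
        for i in range(dim):
            out[i] += w * m["prediction"][i]
    return out
```

Voting or interpolation would replace the weighted average here; averaging across separate agents works the same way, one "match" entry per agent.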

Thursday, August 14, 2014

Granular computing and Asa H

Asa H can be considered to be a project in granular computing (see, for example, Y. Y. Yao, Proc. 4th Chinese National Conf. on Rough Sets and Soft Comp., 2004) "interpreted as the abstraction, generalization, clustering, levels of abstraction, levels of detail, and so on."

Tuesday, August 12, 2014

Big data and artificial intelligence

It is being suggested that big data may be the key to a strong artificial intelligence (see, for example, AI gets its groove back by Lamont Wood, Computerworld, 14 April 2014).  In the 1980s it was common to hear the claim that "you can't be intelligent without knowing a lot" as a part of the work on knowledge based expert systems. 

Certainly big data may offer an environment in which humans find themselves at a disadvantage again. Currently some environments are easier for humans (natural language conversations for example) while some are easier for computing machinery (pocket calculators for example).

Along these lines over the last couple of years I have been slowly increasing the data flow and flow rate into my various Asa H AI experiments.

Monday, August 11, 2014


A good way to speed up the writing of scientific publications is the use of boilerplate.  People have mixed feelings about this practice.  When I was doing plasma physics boilerplate might include:

1. a diagram of the experimental machine
2. a table of typical operating parameters/conditions
3. a paragraph or two describing the device and its operation
4. a paragraph or two describing the plasma diagnostics used

These would change from one publication to the next only if the device or values really did change or if one could in some way improve the boilerplate.

I have had one or two people criticize this practice as somehow "cheating."  I disagree completely.  If one can refer to an earlier paper to present such information, fine, dispense with the boilerplate.  But to the extent that a given publication is to be self-contained, boilerplate may actually serve as quality control.  (Again, so long as it is kept current.)

Most plasma fusion work will at least have a diagram of the machine and a paragraph describing it.  The cost of such machines is so high that they and their descriptions will not be changing from one paper to the next.

If, for some reason, one were to do the same experiment over and over but with a different fill gas, let's say, each publication might then be much like the one before it.  I know of people who do spectroscopic work (not plasma physics) where this has been common.

I've known a number of scientists who would create a talk/presentation by selecting slides from a collection they had assembled (supplemented by any new results recently obtained).
It is not cheating to work smart.

Saturday, August 9, 2014

Scalar utility

Asa H 2.0 has been run with both scalar and vector utilities.  An example of a scalar utility is Asa H 2.0 light (blog of 11 Feb. 2011).  In that code the utility of a case is the total time during which that particular pattern (case) has been observed, i.e., the product of the time duration/length of the case and the number of times the case has occurred.
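This scalar utility is simple enough to sketch in a couple of lines.  The function name is my own, not from the Asa H 2.0 light source:

```python
# Scalar case utility as described above: total observed time =
# (time length of the case) x (number of occurrences of the case).
# "case_utility" is an invented name for this sketch.

def case_utility(duration, occurrences):
    """Return the scalar utility of a case."""
    return duration * occurrences

# A case 5 time steps long, observed 3 times, has utility 15.
```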

Wednesday, August 6, 2014

Vector intelligence again

In their paper "Fractionating human intelligence" (Neuron, 19 Dec. 2012) Hampshire, Highfield, and Owen offer evidence that IQ cannot be described by a scalar quantity but requires at least 3 (vector) components.

Tuesday, August 5, 2014

Natural language versus mentalese

Natural languages are sequential; we can only say or write one word at a time.  The language of thought (mentalese) is, at least in part, parallel, non-sequential.  The brain is a parallel distributed processor; many concepts activate one another across the brain simultaneously.  Humans must teach one another sequentially because they use natural language.  Robots could transfer data to one another in parallel.

Symbol grounding

I agree with Werbos' definition of intelligence, "a system to handle all of the calculations from crude inputs through to overt actions in an adaptive way so as to maximize some measure of performance over time" (IEEE Trans. Systems, Man, and Cybernetics, 1987, pg 7).  A brain is a control system.  In any artificial intelligence all symbols (internal representations) are then surely grounded insofar as they functionally connect sensory and utility/value inputs with outputs/responses.  But in deep (complex) networks some symbols will be far removed from primitive/raw perceptions, i.e., more "abstract" concepts.

Monday, August 4, 2014

What thoughts are made up of

In their paper "What thoughts are made of" (in Embodied Grounding, edited by Semin and Smith, Cambridge U. Press, 2008, pg 108) Boroditsky and Prinz detail a view of the nature of thought which is very similar to my own (as it occurs in my artificial intelligence, Asa H).  They also suggest that teaching an agent a natural language (as I have been doing with Asa H) may enhance the agent's level of intelligence.

One difference is that Boroditsky and Prinz discuss representations (of concepts) in terms of feature lists whereas I employ vectors and allow each feature (vector component) to have variable activation.
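As a hypothetical illustration of the difference (the concept and feature names here are invented, not drawn from either representation):

```python
# A feature list only marks each feature present or absent, while a
# vector gives each feature (component) a variable activation level.

feature_list = {"furry", "barks", "four_legs"}             # present/absent only
vector = {"furry": 0.9, "barks": 0.7, "four_legs": 1.0}    # graded activation

# Graded activations permit graded similarity between concepts:
def overlap(v1, v2):
    """Sum of the shared activation on features common to both vectors."""
    return sum(min(v1[k], v2[k]) for k in v1.keys() & v2.keys())
```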

Friday, August 1, 2014

Repressed memories in Asa H

The performance elements of Asa prefer cases having high utility.  Forgetting/deleting cases with low utility speeds up search.  Additional low utility cases ("repressed" memories) can be retained for use (in a larger "augmented" casebase) by the learning element.  Knowing what NOT to do is useful there.

When a vector utility is employed we prefer to delete cases from the more densely populated regions of the case vector space.  Also, if a case has a single vector component that is high we prefer to retain it.
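The deletion preferences above can be sketched as follows.  The function, parameter names, and cutoff values are my own assumptions for illustration, not the actual Asa H code:

```python
# Decide whether to retain a case: keep any case with a single very
# high utility component; otherwise prefer deleting cases that sit in
# densely populated regions of the case vector space.

def keep_case(utility_vec, local_density,
              density_cutoff=0.8, component_cutoff=0.9):
    """utility_vec: vector utility components (0..1 here);
    local_density: how crowded this region of case space is (0..1)."""
    if max(utility_vec) >= component_cutoff:   # one outstanding component
        return True
    return local_density < density_cutoff      # spare sparse regions
```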

Monday, July 28, 2014

Tensors in artificial intelligence

The nervous system acts as a state machine, taking an input vector at time t1 and a state vector (memory) at time t1 and generating an output vector and state vector at time t2.  Since the input vector and state vector at t1 are typically not parallel to the output vector and state vector at t2 one is led to consider tensors in order to perform the calculations of output and state at t2 from the input and state at t1.  There has been a limited amount of work along these lines; the papers by Pellionisz and Llinas (for example, Neuroscience, vol. 16, pg 245, 1985) and the PhD thesis of C. P. Dolan, UCLA, 1989.
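One step of such a state machine can be written as tensor (matrix) algebra.  This is only an illustrative linear sketch with made-up coefficients, in the spirit of the Pellionisz and Llinas work, not a model of any particular nervous system:

```python
# Output and next state computed as linear (tensor) maps of the
# current input and state, as described above.
import numpy as np

A = np.array([[0.5, 0.1], [0.0, 0.9]])  # state-transition tensor
B = np.array([[1.0], [0.2]])            # input-coupling tensor
C = np.array([[1.0, 0.0]])              # readout tensor

def step(state, inp):
    """Map (state, input) at t1 to (state, output) at t2."""
    new_state = A @ state + B @ inp
    output = C @ new_state
    return new_state, output
```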

Asa H upper ontologies

The upper 5 or so layers of the Asa H hierarchy (see my blog of 4 May 2013) typically include 50 or more concepts in common with the Generalized Upper Model 2.0 (J. A. Bateman, R. Henschel, and F. Rinaldi, 1995) including:

under UM-thing: configuration, happening, sensing, positioning, naming, motion
under UM-relation: attributes, circumstances, ordering, spatial order, facing, behind

There is a weaker relationship with the CYC upper ontology (D. Lenat, et al) including concepts like:

Friday, July 18, 2014

Curriculum for Asa H 2.0

What subjects should a machine learner be taught before releasing it into the wild? And in what order should they be taught? My current best estimate has been something like:
1. features
2. shapes
3. concrete objects
4. actions
5. alphabet and numerals
6. words and naming
7. counting
8. language/reading
9. abstract objects

Thursday, July 17, 2014

Why is there something rather than nothing?

It may be that a vacuum is unstable, much like the expansion of a de Sitter space in general relativity and the creation of particle-antiparticle pairs out of the vacuum in quantum mechanics.  (But we should not expect that current physics tells the whole story.)

Issues with sensor upgrades and Asa

In nature, the brain and intelligence coevolve with the senses and effectors. In humans the visual cortex is a substantial part of the brain. Asa H has been connected to simple LEGO NXT sensors as well as simple visual inputs (see earlier blogs like 12 March 2013, 16 Feb. 2013, 14 Feb. 2013, 13 June 2013). Concepts/semantics grounded in terms of these simple sensory signaling devices may be lost or distorted if/as we try to upgrade to richer sensory systems. 

In humans, some limited reorganization occurs in the brain when sensory input changes (say after loss of an eye or a hand or, conversely, if a child is given reading glasses).  In Asa H some relearning also occurs.  But if large scale improvements are made in, say, Asa's vision system will the previously learned mental concepts be useful?  Or should/must we start learning from scratch with the new sensors in place? Meaning can be very sensitive to the data stream that has been seen (see, for example, pages 381-382 of Kelly's book The Logic of Reliable Inquiry, Oxford, 1996).

Thursday, July 10, 2014


Perfect rationality is impossible (see, for example, Predictably Rational, R. B. McKenzie, Springer, 2010).  My work with Asa H is aimed at producing a mind which is more rational than humans are.

Looking for change

We have experimented with an Asa H in which we do not advance the time step and record input components until an input "changes significantly." (R. Jones, Trans. Kansas Academy Sci., vol. 117, pg 126, 2014)  This can be done by storing and updating a running average of the input (a single component of the input vector OR the input similarity measure, a dot product for example) and a running average of the standard deviation (of the single component OR the similarity measure).
An average over time is involved so we can employ multiple copies of this algorithm, each looking over time windows (intervals) of different length.
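The running-average test can be sketched like this.  The exponential-averaging constant and the 2-sigma threshold are my own choices for illustration, not those of the published code:

```python
# Advance the time step (record the input) only when the input departs
# significantly from its running average, as described above.

class ChangeDetector:
    def __init__(self, alpha=0.1, nsigma=2.0):
        self.alpha, self.nsigma = alpha, nsigma  # averaging rate, threshold
        self.mean, self.var = None, 0.0

    def significant(self, x):
        """Return True when x has 'changed significantly'."""
        if self.mean is None:
            self.mean = x
            return True                          # first sample is recorded
        dev = abs(x - self.mean)
        changed = dev > self.nsigma * (self.var ** 0.5 + 1e-9)
        # update running mean and running variance
        self.mean += self.alpha * (x - self.mean)
        self.var += self.alpha * (dev * dev - self.var)
        return changed
```

Multiple copies, each with its own averaging rate, give the different time windows mentioned above.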

Multiple similarity measures

We advocate scientific pluralism for modeling reality. (R. Jones, Trans. Kansas Academy of Sci., vol. 116, pg 78, 2013)  Similarly, in Asa H we can simultaneously employ multiple similarity measures (either in a single agent or spread through a society of agents) each tracking its own best match in the (single or multiple) case base(s) employed and generating a best preferred action sequence.
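A sketch of running two similarity measures side by side over one case base, each returning its own best match (all names here are mine):

```python
# Two similarity measures; note they can disagree about the best match.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    na = math.sqrt(dot(a, a)) or 1.0
    nb = math.sqrt(dot(b, b)) or 1.0
    return dot(a, b) / (na * nb)

def best_matches(query, casebase, measures):
    """Each measure tracks its own best-matching case."""
    return {name: max(casebase, key=lambda c, m=m: m(query, c))
            for name, m in measures.items()}
```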

Saturday, June 21, 2014

Distraction and focus of attention

As Asa H acquires a larger and broader case base memory it tends to attend to too many things at once.  It may be possible to focus attention by only passing the N most activated concepts (outputs) from each layer of the hierarchy to the next (see my blog of 26 Aug. 2013, lines 1011-1013 of the code).  What value should N have?  Should it be different for different levels?  Should it change as Asa learns more?  If so, how should it change?

There is less of an issue for specialized Asa agents.  A generalist supervisor (or network of supervisors) filters input and sends it to the appropriate specialist(s) for action.

The use of the right feature detectors and the right similarity measures should also help.
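The top-N filter might look like this (a sketch with my own names, not lines 1011-1013 of the actual code):

```python
# Pass only the N most activated concepts from one layer to the next.

def focus(activations, n=3):
    """activations: dict mapping concept name -> activation level."""
    top = sorted(activations, key=activations.get, reverse=True)[:n]
    return {k: activations[k] for k in top}
```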

Monday, June 9, 2014

Conceptual evolution in Asa H

Because of the hierarchical organization of the Asa H memory some concepts/categories evolve (change) substantially and quickly while those at other (higher and/or lower) levels (or the same level) in the hierarchy evolve (change) very little and/or slowly.  "health", "handle", and "house/home" are examples of concepts that we have seen evolve substantially while "direction" and "near" are concepts which we have seen change very little once created.

Sunday, June 8, 2014

Conceptual difficulty

We now know how to give Asa H a vocabulary of more than 400 concepts (see my blogs of 1 April 2013, 12 March 2013 and 1 Feb. 2014) but we have found 1-2 dozen concepts that we have been unable to teach, for example, "privacy", "confidential/secret", "embarrassment", .....

Legal roadblocks to driverless cars?

Will (Google's) driverless cars be any more acceptable to our legal system than medical expert systems? (see 3 Dec. 2013 blog)

Wednesday, June 4, 2014

AI sleep

(REM) sleep is believed to be the brain running "off line," doing cognitive processing while shut off from outside inputs (and output).  Some creative processing is included in this.

Similarly, light versions of Asa H 2.0 suspend I/O while running extrapolation routines (see the code in my blogs of 10 Feb 2011 and 14 May 2012 for one example) and while doing some housekeeping like case sorting/organization and renormalization. 

When Asa is deployed across a large computer network extrapolation (and other creative algorithms) can be run on separate processors and "sleep" can be avoided.

Sunday, June 1, 2014

Asa H natural language processing

I have given Asa H enough natural language capability (see blogs of 4 May, 24 May, 1 April and 12 March 2013) that when it is told in natural language that:

man     is_a     mortal


Bob     is_a     man

it concludes and records that:

Bob     is_a     mortal
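A toy sketch of this transitive inference (the fact-set representation is mine, not Asa H's actual mechanism):

```python
# One round of transitive closure over is_a facts: from
# (man is_a mortal) and (Bob is_a man), conclude (Bob is_a mortal).

facts = {("man", "mortal"), ("Bob", "man")}

def infer(facts):
    """Record every (a, c) such that a is_a b and b is_a c."""
    new = {(a, c) for (a, b1) in facts for (b2, c) in facts if b1 == b2}
    return facts | new
```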

Tuesday, May 27, 2014

Student evaluations are spreading

Something like student evaluations are now being used in medical care.  A survey of the patient's evaluation of the quality of their treatment is now in use.  Since patients are not experts in medicine (just like students are not experts in, say, physics or education) their opinions distort (damage) the delivery of healthcare (see William Sonnenberg in the Fall 2013 issue of Keystone Physician).  Popularity is not a good measure of quality, in education or in medicine.


Pfeifer and Bongard claim that "intelligence always requires a body." (page 18 of How the Body Shapes the Way We Think, Bradford Books, 2006)  While Asa H has used Lego NXT sensors and actuators to help define some concepts (see my blogs of 14 Feb., 16 Feb., 6 March, 12 March, and 1 April 2013) all of these could have been input by humans instead.  While a society of intelligent agents needs to have some contact with the external environment it is not necessary for every individual intelligent agent to have such contact.  A human could serve in place of the AI's input sensors and/or output actuators and as a pre- and postprocessor for the AI. (Just as a man can be an expert in astrophysics without ever looking through a telescope himself.  Specialization can work.  The proverbial "brain in a vat" can still think, still be intelligent.)

Sunday, May 25, 2014

Autonomous learning from the web

With Asa H's current concept vocabulary (see blogs of  14 Feb. 2013, 16 Feb. 2013, 6 March 2013, 12 March 2013, and 1 April 2013) it can now learn a limited amount from a raw internet feed.  Many concepts Asa acquires are then modified by ongoing learning.  For example, just like a child, Asa might first think all "animals" are "dogs."

Tuesday, May 20, 2014

AIs should not be modeled too closely after humans

Humans, like all other animals, are evolutionary hacks. Humans have vestigial organs like the appendix. Our jaws are too small to accept wisdom teeth.  The wiring from our retinal cells passes in front of the retina. Some of us are born with residual tails.  Males have nipples. Similarly, penguins, though flightless, have hollow bones. Boa constrictors have vestigial hind legs. etc. etc. Our brains and minds are kludges too.  We know of various examples of human irrationality. (See, for example, D. Ariely, Predictably Irrational, Harper Collins, 2008) An AI can be more rational, more intelligent, than humans are. After all, unlike humans, AIs really are intelligently designed.

Thursday, May 15, 2014

Semiotic machinery

A real (Lego NXT) or simulated robot (running Asa H 2.0) can learn sensor-action patterns like "collision":

with the robot moving at time step 1
sensing an object far away at time step 1

with the robot moving at time step 2
sensing an object nearby at time step 2

sense contact at time step 3

If an observer inputs the word "collision" at the same time then Asa H associates this sign with the concept it learns (see my blog of 6 March 2013 and chapter 1 of my book, Twelve Papers).  Asa H has much of the same functionality that Meystel prescribes in his book, Semiotic Modeling and Situation Analysis (AdRem, 1995).
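A minimal sketch of the sign association (my own formulation, not the Asa H code):

```python
# The learned sensor-action sequence, as listed above, with the
# observer-supplied word stored as the sign for the whole pattern.

case = [
    {"moving": 1.0, "object_far": 1.0},    # time step 1
    {"moving": 1.0, "object_near": 1.0},   # time step 2
    {"contact": 1.0},                      # time step 3
]

def attach_sign(case, sign):
    """Associate an observer-supplied word with a learned pattern."""
    return {"sign": sign, "pattern": case}

collision = attach_sign(case, "collision")
```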

Wednesday, May 7, 2014

Student evaluations of instructors

Tests are a way of getting students to work a bit harder than they otherwise might.  Tests may or may not measure anything about what the student has accomplished in the course. Similarly, student evaluations of instructors are a way to make instructors work harder than they otherwise might.  Student evaluations may or may not measure much about what the instructor accomplished in the course s/he taught. (And there may be better ways to both motivate and evaluate the instructor. see my presentation in Bulletin of the American Physical Society, vol. 40, pg. 968, 1995)

Wednesday, April 30, 2014

How narrow is cognitive science?

Am I doing cognitive science?  Is AI cognitive science?  Jim Davies discusses the scope of cognitive science in his paper "The role of artificial intelligence research methods in cognitive science" (a recent ICCM conference).  To my way of thinking cognitive science need not be restricted to the kinds of cognition that occur in animals.  There may well be kinds of cognition that occur in machines but which never happen in animals.  You're still studying cognition.

Saturday, April 19, 2014

The right portable computer

I have previously talked about my likes and dislikes with respect to mobile computing devices (blogs of 8 Feb. 2012, 2 March and 23 March 2012).  We currently have an iPad Air in a Zagg folio with keyboard.  In effect, a very compact laptop.  This is nearly ideal.  I only wish it had a USB port.

Thursday, April 17, 2014

Vector values again

Like my Asa H, IBM's Watson software has a vector value system.  IBM says Watson is measured along 5 key dimensions: broad/open knowledge domain, complex language use, high precision, accurate answers, and high speed.  When they actually plot Watson's performance and chart its progress over time they use a 2-D plot with precision and percentage of questions answered as the 2 dimensions.

Wednesday, April 9, 2014

Twenty years ago today

In looking at some old notes I see that 20 years ago today I was working on 2 AI projects: my semantic network (published in Trans. Kan. Acad. Sci., vol. 102, pg 32, 1999) and a constructive ("growing") neural-network program. (Work on Asa F was to begin about a year later.)

Monday, April 7, 2014

AI better than human again

As a test of Asa H's vision capability (see blog of 13 June 2013) we have input a stream of handwritten numerals like those used for postal zip codes.  After training, Asa H could recognize these numerals with 99.1% accuracy.  Humans read the same codes with an accuracy of 98.8%.

Some examples of numerals that are difficult to identify:

Saturday, April 5, 2014

Artificial intelligence in use today

At a recent conference (where I presented my work on Asa H 2.0) I was again asked when we would have an artificial intelligence.  I replied that there were many AIs in use today.  Since Asa is, among other things, an example of deep learning I gave some examples of deep learners that are in use every day: speech recognition in iPhone's Siri and Google's Android smartphone software, Google's photo search software, etc.

Friday, April 4, 2014

Rewriting, reformulating, and problem solving

If a student can't answer a question posed in words they are often advised to reword the question and see if they better understand what is being asked for in the reformulated question.  This advice is also applicable to other forms of knowledge representation like mathematics, diagrams, figures, etc.  When working with an electric circuit diagram, for example, it may be useful to consider lengthening and shortening various wires and moving components and connections around.  This may make it more obvious that several resistors are purely in series or purely in parallel, for example.  In mathematics, if you have a set of equations, rather than eliminating variables it may be useful to rewrite the equations in matrix form and then seek a solution by finding an inverse matrix.
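The matrix reformulation can be illustrated with a small worked example (the particular pair of equations is my own):

```python
# Rewrite the system 2x + y = 5, x - y = 1 in matrix form A v = b
# and solve by finding an inverse matrix rather than eliminating
# variables.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, -1.0]])
b = np.array([5.0, 1.0])

v = np.linalg.inv(A) @ b   # v = (x, y) = (2, 1)
```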

Thursday, April 3, 2014


I am reading Mariam Thalos' book Without Hierarchy (Oxford, 2013). Patterns in nature are seen on large spatial and temporal scales and on small spatial and temporal scales.  Life and intelligence are seen to be present at large scales.  An electron is not alive or intelligent. Science (even physics) is not just about the smallest of things.  PV=NkT is a useful description of behavior at a large scale. The shape of V doesn't matter.  Many different experimental setups might yield exactly the same measurements.

Wednesday, April 2, 2014


I have been looking at MIT's ConceptNet 5.2.  It is intended to do some of the things I was doing with my associative/semantic networks (Trans. Kan. Acad. Sci., vol. 102, pg 32, 1999). My network ran very slowly (in PROLOG) when the number of associations (the database) reached about 3000.
(In some PROLOG interpreters it crashed.) Several times I have thought of rewriting the program in another language to speed it up but I've not spent the time needed to do it.

ConceptNet was given some verbal IQ tests and came out about equal to a human 4 year old.

Tuesday, April 1, 2014

Human values again

The flight 370 story reminds me again that human values are not what they should be and that an AI can (and should) have better values than humans.  We spend large sums on rescue and disaster relief but relatively little on prevention and infrastructure. More money needs to be spent on prevention.

Thursday, March 27, 2014

How life may have begun

One way in which life may have begun from nonliving chemicals has been presented, based on the work of Jack Szostak.

Program reuse

Oftentimes an (AI) code library is used as a source of snippets from which you build new programs (see my blog of 20 Feb. 2014).  But sometimes you can reuse whole programs (either your own or ones you've collected from other people).  For example, in order to perform an experiment in forgetting I was once able to edit a few lines of (someone else's) neural network program and get the results I was interested in.

Wednesday, March 26, 2014

The wave function and the nature of reality

I am reading The Wave Function edited by Ney and Albert (Oxford U. Press, 2013).  It is mostly a debate between those who believe in an ontology involving particles and waves in 4-space and those who prefer quantum fields in (a much larger) configuration space (or vectors in a Hilbert space).  Of course one could also believe in BOTH (i.e., www.robert-w-jones, philosopher, quantum mechanical dualism).  And one need not think that all entities in an ontology are equally "useful" or equally "valid" or equally "true." (see my blogs of 17 Aug. 2013 and 12 Sept. 2013)  My Asa H artificial intelligence does not make such an assumption about the concepts it forms/evolves.  Ontologies used by other artificial intelligence projects (expert systems) typically contain entities of various degrees of usefulness, validity, and truth.  Process metaphysics might complicate matters even more. (Process Metaphysics, N. Rescher, SUNY Press, 1996)  Ontologies may have to change over time just as they do in Asa H and in humans.

More on X-ray pulses

I have observed tiny X-ray emitting spots in electron beam-plasma discharges operated at a few kilovolts.

Wednesday, March 19, 2014

X-ray pulses from glow discharges

Alexander Karabut has reported pulses of 1.5 keV X-rays from glow discharges operated at <=0.5 Amps, <=10 Torr, and 500-2500 Volts. (Condensed Matter Nuclear Science, Jean-Paul Biberian, ed., World Scientific, 2006, pg 253). This should not be a surprise.  One can expect dust particles impacted by a 500-2500 Volt electron beam to produce such X-ray pulses.

Saturday, March 15, 2014

Amplified teaching

One hopes that when you teach teachers (as we frequently do at ESU) you will reach a larger number of people.  In my case I try to present things like the nature of knowledge, the nature of science, how to do research, how to keep lab notes, etc. etc.

Friday, March 14, 2014

How many layers should/will Asa H have?

My code samples of Asa H 2.0 light (blogs of 10 Feb. 2011 and 14 May 2012) are for a single layer in the hierarchy.  These layers are then connected one to another by something like the code in my 26 Aug. 2013 blog (That blog assumes layers write to or read from data files. This I've done if I want to keep a record of these calculation steps and the categories being formed. More direct connection is possible, of course, and I've done that too.)
But how many layers are needed?  I've used at least 9 layers in various experiments to date; I may have used more, I'd have to look back and see.  The first or second layer might detect edges, for example, the next might detect lines at various orientations, the next might sense corners, the next might detect simple shapes, the next might detect eyes or other features, then faces, then complete creatures, etc.  How deep should deep learning be?
Similarly, how big a pattern can each layer learn? (For example, what should TMAX be set to in the code from my 10 Feb. 2011 blog?) In many of my experiments TMAX=5 was used.  Possibly TMAX=10 would be better. In more advanced experiments I have sometimes let TMAX grow during a run.
These questions are related to the issue of what curriculum should be provided as training for Asa.  What should an AI learn (ANY AI, any machine learning algorithm), and in what order?  This is less of a problem if the AI is intended to operate in a limited domain.  The question becomes more important if the AI is to operate in the "real world."
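The role of TMAX can be sketched as follows (my own minimal formulation, not the Asa H 2.0 light code):

```python
# TMAX bounds the longest pattern a layer can learn: each layer
# collects contiguous input windows of up to TMAX time steps.

def windows(stream, tmax=5):
    """All contiguous patterns of length 1..tmax in the input stream."""
    out = []
    for i in range(len(stream)):
        for t in range(1, tmax + 1):
            if i + t <= len(stream):
                out.append(tuple(stream[i:i + t]))
    return out
```

Letting TMAX grow during a run, as in the more advanced experiments mentioned above, just means raising the `tmax` bound over time.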

Monday, March 10, 2014

More iphone issues

In the last 3 months I have had to hard reset my iPhone 5c twice.  Both times the battery had been drained overnight while the phone was not in use. (I use the phone for calls, texting, as a watch and pocket calculator, and for light web searching and reading email.)

Thursday, March 6, 2014

Categories and fuzzy causality

Much of our description of the world is in terms of categories, not individuals.  If you see a red light you stop.  If you see a green light you go.  The exact shade of green or red doesn't matter.  It just has to be red.  In a computer anything in the range of 0 to 0.8 Volts is considered a "0".  Anything in the range from 2.0 to 5.3 Volts is considered a "1".  It's the categories that matter.  In quantum theory we talk in terms of the probability that an electron is in the range between x and x + dx.  That's all we can say about its position.  When you measure anything you'd like to have an average and a standard deviation, not just a single measurement.  Reproducibility is then not as simple as one might have thought.  Two measured numbers are "the same" (the same category) if they are within a standard deviation or so of each other.  We do not, however, use this as an excuse for being careless or lazy.  Students find this difficult.  It's not as simple as they would like it to be.
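The logic-level example amounts to mapping voltage ranges onto categories; a sketch using the thresholds quoted above:

```python
# Map a measured voltage onto the digital categories "0" and "1";
# voltages between the two ranges belong to neither category.

def logic_level(volts):
    if 0.0 <= volts <= 0.8:
        return 0
    if 2.0 <= volts <= 5.3:
        return 1
    return None  # undefined region: it's the categories that matter
```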

Proposed US fusion funding cut

The new US federal budget calls for a cut in DOE fusion energy funding of 17.6%.

Friday, February 28, 2014


Recently I was watching Closer to Truth on PBS.  When the question of god comes up my usual reply is simply: "Such extraordinary beliefs would REQUIRE extraordinary evidence." (Carl Sagan)   On Closer to Truth the discussion was centered around the question of could god be outside of space and time.  Since extra dimensions are now quite acceptable in theoretical physics god could exist outside of space and time by existing in one or more such extra dimensions.  A god could have "duration" outside of space-time if one of the extra dimensions were time-like.  A god could have extension outside of space-time if one or more of the extra dimensions were space-like.  It has been suggested that gravity is weak because it leaks out of our space-time into extra dimensions.  Similarly forces might exist whereby a god could act on our space-time from outside.  Be it religion or be it physics: Such extraordinary beliefs would require extraordinary evidence.

Tuesday, February 25, 2014

Corky bot

The students of the ESU "robotics club" are going to try to repair the Corky robot.

Thursday, February 20, 2014

AI code library

In order to speed program development and coding I keep a library of code snippets, generic algorithms, etc. including:

finite state machines
string pattern matchers
cellular automata
speech recognition code
least squares fitting
digital control systems
regression analysis
Markov models
semantic networks
various neural network algorithms
various expert system inference engines
fuzzy logic systems
various learning algorithms
optimization algorithms
interpolation algorithms
other statistical algorithms
natural language systems
interfacing and bot code
genetic algorithms
etc., etc.

Some of these I wrote myself, many others I have collected from books, journals, and the web.

Monday, February 17, 2014

Computer lab 2014

A computer lab isn't a room or a building any more.  Worldwide, there are ~42% tablets, ~33% laptops, and ~25% desktop computers (my own distribution is also close to this).  So ~75% of computers are highly mobile.  Nowadays a "computer lab" is simply a (decentralized) set of hardware, software, and documentation (see my blog of 17 Dec. 2012).  At any given time my computer lab is distributed across 5 or 6 rooms that I'm working in.  The scanner, printer, server, router, modem, etc. are in one fixed location, however.

Wednesday, February 12, 2014

Emergence again?

In capitalism humans have become mere disposable, replaceable machinery (see Marx) and corporations are people (US Supreme Court, 2010).

Thursday, February 6, 2014

AMBR, associative memory-based reasoning

Associative memory-based reasoning is a good four word description of my Asa AI so I recently bought a copy of Alexander Petrov's new book by the same name (Associative memory-based reasoning, A. Petrov, LAP Lambert, 2013).  I also have a copy of Petrov's PhD thesis which is partially about AMBR (A Dynamic Emergent Computational Model of Analogy-making, New Bulgarian Univ., 1998). Asa also does analogies, so that was also of interest.  Where my Asa programs are examples of "numerical AI" Petrov's AMBR programs are a more traditional symbolic/rule-based AI. There doesn't appear to be anything I can directly adapt and use.  Once again two people can be trying to do the same things but in very different ways.


Christof Koch believes that "Consciousness is just brainwide sharing of information that is in the memory buffer." "It is the content of this short-term memory buffer that we become conscious of." "...broadcasting information from this buffer to the rest of the brain is what renders it conscious." (Science, vol. 343, 31 Jan 2014, pg 487)  A blackboard architecture (Blackboard Systems, Iain Craig, Ablex, 1995) would then be a good approach to computer consciousness. (With a rather small blackboard.)  "Deja vu all over again." (see my 29 June 2011 blog)

Happy 50th birthday BASIC

Dartmouth BASIC was first run in early 1964.  Happy 50th birthday BASIC.  Because of its combination of power and extreme simplicity BASIC may well be the best programming language. (But not best for everything.)  See "Why Johnny Can't Code" by David Brin.

Saturday, February 1, 2014

Conversational AI

It is quite difficult to give Asa H natural language capabilities. (see my book, paper 1)  I have managed to establish a vocabulary of about 200 words/concepts including:

hear, sound, speak, word, temperature, hot, cold, feel, touch, see, light, dark, color, yellow, green, blue, red, black, time, day, night, direction, north, south, acceleration, left, right, front, back, turn, top, rise, bottom, lower, arm, hand, joint, rotate, finger, grasp, hold, drop, body, move, action, fast, slow, stop, start, collision, damage, hard, soft, flexible, range, near, far, distant, side, retreat, push, carry, on, air, wind, open, close, ground, look, work, rest, wait, home, nest, health, hunger, feed, recharge, mouth, person, head, eye, ear, leg, pain, bad, object

(grounded as described in my blogs of 6 and 12 March 2013, 1 April 2013, and 29 Jan. 2014)  This is just sufficient to permit conversations with Asa in simple pidgin English.  It is hoped that Asa can then grow its vocabulary naturally from this starting point.

Wednesday, January 29, 2014

Expanding Asa H's ontology

As a part of Asa H's initial ontology (see blogs of 14 and 16 Feb. 2013 and 12 March 2013) I used some preprocessor modules to recognize (and define) characters (e.g., letters and numerals).  I have begun to develop (sometimes "train") similar modules to recognize/define various objects and features.  These modules can either try to recognize an input (frequently an image) obtained at a single instant in time or a sequence of inputs obtained over a (typically short) time period.  Time-varying input is needed where an action is being defined.  A human can also serve as the "preprocessor", perhaps on a temporary basis.  Asa H can also ask for human advice if it is unsure of what its own preprocessor is seeing.  Not all of these preprocessors are being added to the same Asa H agent.  Rather, I am developing a number of individual "specialists."

Monday, January 20, 2014

The poor have too little because the rich take too much

Oxfam has just reported that the richest 85 people have as much wealth as half of the world's population combined! (~3,500,000,000 people!)

Friday, January 17, 2014

Advanced radiation protection?

In order to have radiation-free nuclear reactions (the purported LENRs, "cold fusion") it would be necessary that "...could fractionate such large MeV quanta into millions or even billions of smaller quanta." (P. L. Hagelstein, Infinite Energy, issue 112, pg 12, 2013, see also my sci.physics.fusion post of 1 April 2004 and paper in Kansas Sci. Teacher, vol. 7, pg 12, 1990)  If one had such a mechanism it might be even more important for use as general radiation shielding.

Wednesday, January 15, 2014

Explanatory pluralism (and emergence)

"reductionism that privileges a particular level of explanation over others neglects the fact that mechanisms of different scale are most appropriate for explaining different phenomena."  Paul Thagard in Hot Thought, MIT Press, 2006, page 272

The point is that we find patterns in time and in space at many different scales.  One need not describe the mating habits of butterflies in terms of the dynamics of quarks.

Sunday, January 12, 2014

Vector values again

Paul Thagard has done substantial work on the coherence/consistency of thought (see his books Hot Thought, 2006, and Coherence in Thought and Action, 2000). While I do not agree with the exact algorithms he advocates (see Iris van Rooij's review "If Hot Coherence is Rational, then How?" for reasons), I do think it is significant that he employs a vector value/utility (having vector components representing 1. how much you like a given idea and 2. how much you believe this idea).
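
A two-component vector value of this kind is easy to sketch. The representation below follows the liking/belief components just described; the weighted-sum rule for collapsing the vector to a scalar (needed only when a single choice must be forced) is my own illustrative assumption:

```python
# Sketch of a two-component vector value: component 0 is how much an
# idea is liked, component 1 is how much it is believed.

def vector_value(liking, belief):
    return (liking, belief)

def scalar_preference(value, weights=(0.5, 0.5)):
    """Collapse a vector value to a scalar only when one choice is forced."""
    return sum(v * w for v, w in zip(value, weights))

idea_a = vector_value(liking=0.9, belief=0.3)   # appealing but doubtful
idea_b = vector_value(liking=0.5, belief=0.9)   # less appealing, well supported
print(scalar_preference(idea_a), scalar_preference(idea_b))
```

Keeping the components separate until a decision is forced is exactly what makes the value a vector rather than a scalar utility.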

Tuesday, January 7, 2014

The problem of contingency

The problem of contingency:  "Why this universe and not some other?"

Tegmark considers that "some subset of all mathematical [structures is] endowed with...physical existence" (M. Tegmark, Annals of Physics, 270, pp. 1-51, 1998). In later papers Tegmark suggests that ALL mathematical structures have physical existence, and then that all finite computable structures have physical existence (Foundations of Physics, 38, pp. 101-150, 2008).

In my view:

Observing (interacting with) the physical world we learn/record a collection of patterns and procedures (action patterns/sequences). We learn to count,  to gather objects together (form collections), to add objects to a collection (add), to remove objects from a collection (subtract), to compare collection sizes (equate), to divide collections into a number of equal size smaller sets (divide), etc., etc.  Science studies the patterns we see in the world of our experience, the "physical world."
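
The elementary action patterns named above can be expressed directly as operations on collections (a minimal sketch; Python lists stand in for collections of physical objects):

```python
# Elementary patterns learned from interacting with collections of objects.

def count(collection):
    return len(collection)

def add(collection, obj):
    """Add an object to a collection."""
    return collection + [obj]

def subtract(collection, obj):
    """Remove an object from a collection."""
    c = list(collection)
    c.remove(obj)
    return c

def equate(a, b):
    """Compare collection sizes."""
    return count(a) == count(b)

def divide(collection, n):
    """Split a collection into n equal-size smaller sets."""
    size = count(collection) // n
    return [collection[i * size:(i + 1) * size] for i in range(n)]

apples = ["a1", "a2", "a3", "a4"]
print(count(apples), divide(apples, 2))
```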

In mathematics we study patterns, both those we see in the world and any patterns that we choose to make up.  We combine elementary patterns, divide them up, recombine, etc.  Some of what we compose is then found to occur in the world, some is not.  (We see horns in the world.  We see ponies in the world.  We combine these to form unicorns.  We don't happen to find unicorns in the world of our experience.)

My artificial intelligence Asa H thinks in this same way.

Wednesday, January 1, 2014

AI dreams

In Asa H sequences from memory are extrapolated and the resulting synthetic sequence is evaluated for its potential usefulness.  Various AI systems may use (mental) simulations (Artificial General Intelligence, Goertzel and Pennachin, eds., Springer, 2007, pg 353) to judge the usefulness of such extrapolations.  This might be considered "dreaming." 
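
A bare-bones sketch of this extrapolate-then-evaluate loop (the linear extrapolation rule and the usefulness measure here are illustrative assumptions, not the Asa H algorithms):

```python
# "Dreaming" sketch: continue a remembered sequence and score the
# synthetic continuation for potential usefulness.

def extrapolate(sequence, steps=3):
    """Continue a numeric sequence by repeating its last observed step size."""
    delta = sequence[-1] - sequence[-2]
    out = list(sequence)
    for _ in range(steps):
        out.append(out[-1] + delta)
    return out

def usefulness(sequence, goal):
    """Score a synthetic sequence by how close its end state gets to a goal."""
    return -abs(sequence[-1] - goal)

memory = [0.0, 1.0, 2.0, 3.0]         # a remembered case sequence
dream = extrapolate(memory, steps=3)  # synthetic continuation
print(dream, usefulness(dream, goal=6.0))
```

A real system would extrapolate whole case vectors rather than single numbers, and would evaluate them by mental simulation, but the shape of the loop is the same.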

AI sleep

We usually say that AIs don't sleep and count it as an advantage they have over humans.  But this is not entirely true.  There can be (long) periods of time when no output (or output change) is warranted from the AI.  A mobile robot may need to remain static while it recharges its batteries (for example, my 3 Roombas).  There can be extended periods of time during which inputs don't change much (see my blog of 1 March 2013, item number 1).  This can be related to periodic variation of the environment (like nighttime if visual senses are involved).  These result in a period of low (or zero) activity for the AI.  Of course this may be avoided if a distributed multiagent system is involved.

Even internal activity ("thinking") may sometimes be reduced.  There may be times when the AI is organizing/sorting its memory (perhaps to permit faster memory search at future times). Time may be spent finding and resolving conflicts/contradictions in memory.  In Asa H time is spent normalizing or renormalizing new or modified case vectors (assuming that a dot product similarity measure is going to be used).  Time may also be spent on things like defragging and garbage collection.
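
The case-vector (re)normalization mentioned above is simple to show: once vectors are unit length, a plain dot product serves as a similarity measure (this is a generic sketch of the technique, not the Asa H code itself):

```python
# Normalize case vectors to unit length so that a dot product gives a
# cosine similarity in [-1, 1].

def normalize(vector):
    norm = sum(x * x for x in vector) ** 0.5
    return [x / norm for x in vector] if norm else list(vector)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

case = normalize([3.0, 4.0])     # stored case vector, now unit length
query = normalize([6.0, 8.0])    # same direction, different magnitude
print(dot(case, query))          # ~1.0: identical direction
```

This is why the work can be done offline, during "sleep": the normalization depends only on the stored cases, not on any current input.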

An AI might sleep.

Conscious machines

If Anderson is correct ACT-R is conscious (How Can the Human Mind Occur in the Physical Universe?, John R. Anderson, Oxford U. Press, 2007, pg 244).  I have ACT-R running in my computer lab.  In my blog of 29 June 2011 I have also argued that Asa H may be conscious.

Mechanical life

If Adami is right AVIDA is alive (Introduction to Artificial Life, Christoph Adami, Springer-Verlag, New York, 1998).  I have AVIDA running in my computer lab. In my blog of 19 Oct. 2010 I have also argued that Asa H may be alive.

Heredity versus environment

During simple/brief experiments Asa H's behavior is dominated by heredity, i.e., its innate algorithms plus any case base loaded as input (see my blog of 26 Aug. 2013). During longer and more complicated experiments Asa H learns from the complex patterns it sees in the world and its behavior comes to depend more on environmental influences.  In some experiments the new case memory acquired during Asa's operation can exceed the size and complexity of the original Asa H software package.  As Asa lives longer, nurture may come to be more important.  Is the same thing occurring in nature as animals live longer?

G.O.F.C.S., good old-fashioned cognitive science

Asa H is in the tradition of good old-fashioned cognitive science in its use of processes like memory, classification, feature detection, and analogy.  It is original in some of the details, such as its use of vector value systems.