Wednesday, February 25, 2015

My externalism

Sometimes I have to go look up what I believe. I have recorded my best arguments on various difficult subjects (some of them here in my blog). Like a complex mathematical proof or calculation I've done, they are not on the tip of my tongue, not something I can just rattle off.

Tuesday, February 24, 2015

CHREST 4

I now have a copy of Gobet and Lane's CHREST 4 cognitive architecture running in my lab. I am interested in how attention works in CHREST, the size of the fragments of stimuli that are learned, and how chunks grow incrementally larger as learning continues. These are all related to similar issues in my Asa H architecture. Having had different prior experiences, CHREST extracts different concepts/chunks and models subsequent experiences differently. It experiences an alternate reality, as Asa does.
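
To make the idea of incremental chunk growth concrete, here is a toy sketch of my own, not Gobet and Lane's actual CHREST code: on each exposure to a stimulus, the longest known chunk matching its start is grown by one more element. The one-element-at-a-time rule and the example stimulus are my assumptions for illustration only.

# Toy illustration of incremental chunk growth (not the actual CHREST algorithm).
# A chunk is a tuple of primitive symbols; on each presentation of a stimulus
# the longest stored chunk matching its beginning is extended by one element.

def learn(chunks, stimulus):
    """Grow the chunk set from one stimulus (a sequence of symbols)."""
    stimulus = tuple(stimulus)
    match = max((c for c in chunks if stimulus[:len(c)] == c),
                key=len, default=())
    if len(match) < len(stimulus):
        chunks.add(stimulus[:len(match) + 1])   # extend the chunk by one element
    return chunks

chunks = set()
for _ in range(4):                      # repeated exposure to the same stimulus
    learn(chunks, "white pawn on e4".split())
print(sorted(chunks, key=len))          # chunks grow one element per exposure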

Monday, February 23, 2015

Retrodiction

It is not surprising to find that having knowledge of final conditions plus knowledge of initial conditions may tell us more than having knowledge of initial conditions alone. In a game of Russian roulette we might have initial conditions at time t0: we know that the cylinder was spun, the gun was pointed at the victim's head, and the trigger was pulled. Given just these initial conditions, we have a 5/6 chance of hearing a click and a 1/6 chance of hearing a bang at time t1 (t1 after t0). But given the added final condition that the victim is dead from gunshot wounds at time t2 (t2 after t1), we have increased the chance that a bang was recorded back at t1. No retrocausation is implied. To have retrocausation we would want to be able to control the final condition at t2 (not just have a record of it) and thereby cause a change at t1.
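
The update can be checked with Bayes' rule. The sketch below assumes, purely for illustration, that a bang is always fatal and a click never is.

# Bayes-rule check of the Russian roulette example: knowing the final
# condition (victim dead at t2) sharpens our estimate of what happened at t1.
# Assumes, for illustration, that a bang is always fatal and a click never is.

p_bang          = 1.0 / 6.0          # prior: one loaded chamber out of six
p_click         = 5.0 / 6.0
p_dead_if_bang  = 1.0                # assumed for this toy calculation
p_dead_if_click = 0.0

p_dead = p_dead_if_bang * p_bang + p_dead_if_click * p_click
p_bang_given_dead = p_dead_if_bang * p_bang / p_dead

print(p_bang)             # about 0.167  (initial conditions alone)
print(p_bang_given_dead)  # 1.0          (initial plus final conditions)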

Friday, February 20, 2015

Spiking neural networks

Although it has limited learning capability, I was quite impressed by Eliasmith et al.'s computer model of the human brain, Spaun (see Science, vol. 338, 30 Nov. 2012, pg. 1202, for example). In order to play with spiking neurons myself I downloaded a copy of Carnevale and Hines' NEURON 7.3 simulation environment. I have this software up and running but need to order a copy of Carnevale and Hines' book.
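
In the meantime, a minimal leaky integrate-and-fire neuron gives the flavor of the spiking models involved. This is plain Python of my own, not NEURON's interface, and all parameter values are merely illustrative.

# Minimal leaky integrate-and-fire neuron (plain Python, not NEURON's API).
# The membrane potential decays toward rest, charges under a constant input
# drive, and emits a spike whenever it crosses threshold.

tau_m, v_rest, v_thresh, v_reset = 20.0, -65.0, -50.0, -65.0   # ms, mV
dt, t_stop, input_mv = 0.1, 100.0, 20.0

v, t, spikes = v_rest, 0.0, []
while t < t_stop:
    dv = (-(v - v_rest) + input_mv) / tau_m      # leak plus drive
    v += dv * dt
    if v >= v_thresh:                            # threshold crossing -> spike
        spikes.append(round(t, 1))
        v = v_reset
    t += dt

print(len(spikes), "spikes at (ms):", spikes)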

Is nothing something?

Martin Heidegger claimed that the most fundamental question of philosophy is why there is something rather than nothing. But perhaps nothing is something too. Nothing has properties: length, width, depth, duration, permeability, permittivity, etc., and these properties can be measured by our senses or by instruments. If one then asks why a particular thing has the properties it has, the answer may be that we define the properties we do exactly so as to be able to distinguish things from one another, i.e., to categorize, organize, and describe. (And there is no need for everyone to define the same properties and the same categories. Reality can have alternate descriptions. And we are all free to create categories, like unicorns, that aren't really observed.) Again, "nothing" would be just another "something."

Thursday, February 19, 2015

Concepts and emergence

A concept valid and useful at one level in the concept hierarchy might not be valid at other levels.  Consider the concept of wetness.  Water is wet.  Hydrogen, oxygen, and atoms in general are not wet.  A useful description at one level in our hierarchy of models may not be valid or useful at other levels.

Asa as a tool for philosophical research

Asa has been (and is being) used as a platform to explore things like:

alternate conceptualizations of reality (see blogs of 31 Dec. 2010 and 22 April 2013)
consciousness (see blogs of 29 June 2011 and 15 Oct. 2014)
free will (see blog of 21 Jan. 2015)
imagination (see blog of 12 Feb. 2015)
values (see blog of 9 Jan. 2015)

Wednesday, February 18, 2015

Error

In pluralistic science we maintain multiple theories of a knowledge domain rather than just some single "best" theory. (see blog of 17 Aug. 2012)    This means that individual errors are often less serious than they are in traditional science since they frequently impact only one of our models and usually not all of them. But we should still work as hard as possible to exclude error. (blog of 2 April 2012)

Monday, February 16, 2015

Weighting features from above and below

I have experimented with weighting input features in Asa H.
Forward/upward weighting:  When a category is active in a layer of the Asa H hierarchy, a utility value for that category can be passed up to the next Asa H layer along with that category's current activity value. This category is one of the input features for the next layer in the hierarchy, and that feature's input activation can be weighted by its accompanying utility value.
Backward/downward weighting: As input features are compared with, and activate, a category in a given layer, this (output) category has, itself, a utility which can be used as a weight for the input features. (Trans. Kan. Acad. Sci., vol. 109, no. 3/4, pg. 160, 2006)
Some other weightings (a sketch of such weightings follows the list below):
Weight a feature according to how often it is seen.
Weight a feature according to how often it changes.
Weight a feature according to some average of the utilities of the categories it occurs in.
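
Here is a minimal sketch of how a utility weighting might be applied. The function name, the weighted dot product, and the normalization are my assumptions for illustration, not the actual Asa H code.

# Sketch of weighting input features by utility (not the actual Asa H code).
# Each feature arrives with an activation and a utility passed up from the
# layer below; the match against a stored category is a weighted dot product.

def weighted_similarity(activations, utilities, category):
    """Compare a utility-weighted input feature vector with a stored category."""
    weighted = [a * u for a, u in zip(activations, utilities)]
    norm = sum(abs(w) for w in weighted) or 1.0
    return sum(w * c for w, c in zip(weighted, category)) / norm

# forward/upward weighting: utilities come up with the features from below
activations = [0.9, 0.2, 0.7]
utilities   = [1.0, 0.3, 0.8]        # utility of each lower-level category
category    = [1.0, 0.0, 1.0]        # a stored category on this layer

score = weighted_similarity(activations, utilities, category)

# backward/downward weighting: rescale the same match by the utility of the
# output category itself
category_utility = 0.6
print(score, score * category_utility)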

Friday, February 13, 2015

Deep space?

Calling flights to lunar distances flights to "deep space" is highly inaccurate and pretentious. A reasonable name for the space at lunar distances is cislunar space. Flights to Mars and the other planets are flights in interplanetary space. Flights well outside the solar system would be flights in interstellar space. (Voyager is at the edge of interstellar space.) Flights outside the Milky Way would be in intergalactic space. That might be getting us close to deep space.

Thursday, February 12, 2015

Concepts, concept change, imagination

Many concepts are empirically grounded. Near and far might be defined for Asa by a Lego NXT ultrasonic sensor. Push and pull might be defined for Asa by a Lego NXT or Vernier force sensor, etc. Other concepts are at least partially nonempirical; the concept of a unicorn, for example. Asa may have seen pictures of horses and goats. The concept of a goat will have a horn as one of its features. (A feature being a concept stored on the next lower level of the Asa H hierarchy.) Asa's various learning algorithms include things like vector interpolation, extrapolation, chaining, etc. Asa may try to combine the features of a horse with those of a goat, for example, and produce the concept of a unicorn.
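
As a toy illustration of that kind of combination, one can interpolate between two learned feature vectors. The feature names and the simple 50/50 blend below are my own stand-ins, not Asa H's actual representation or learning operators.

# Toy illustration of forming a new, partly nonempirical concept by blending
# the feature vectors of two learned concepts (stand-in for Asa H's actual
# vector interpolation / extrapolation / chaining).

features = ["legs", "hooves", "mane", "horn", "beard"]
horse    = [1.0, 1.0, 1.0, 0.0, 0.0]
goat     = [1.0, 1.0, 0.0, 1.0, 1.0]

alpha = 0.5                                   # blend fraction (assumed)
unicorn = [alpha * h + (1 - alpha) * g for h, g in zip(horse, goat)]

for name, value in zip(features, unicorn):
    print(name, value)                        # a blend of horse and goat features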

Tuesday, February 10, 2015

Natural language preprocessor

I am working on a natural language preprocessor, possibly for use with Asa H. The simplest version forces you to use only words that the AI knows: it compares each input word against a vocabulary listing of all the words the AI understands. A more complex version of the preprocessor would allow the use of words that are synonyms of the words the AI understands. This would involve augmenting the AI's vocabulary listing by adding synonyms. A still more complex preprocessor would search for phrases in the input and compare these with synonym phrases. Again, the vocabulary listing would be expanded to include the phrases the AI understands and sets of synonym phrases. A spell checker may also be useful.
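
A minimal sketch of the simplest version, with the synonym step added, looks something like the following. The vocabulary and synonym lists here are placeholders of my own, not the AI's actual ones.

# Minimal sketch of the natural language preprocessor described above.
# The vocabulary and synonym lists are placeholders only.

vocabulary = {"go", "forward", "stop", "turn", "left", "right"}
synonyms   = {"advance": "forward", "halt": "stop", "proceed": "go"}

def preprocess(sentence):
    """Map each input word to a known word, or reject the sentence."""
    output = []
    for word in sentence.lower().split():
        word = synonyms.get(word, word)       # synonym substitution step
        if word in vocabulary:
            output.append(word)
        else:
            raise ValueError("unknown word: " + word)   # force known words only
    return output

print(preprocess("Proceed forward"))   # -> ['go', 'forward']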

Sunday, February 8, 2015

VASIMR

Franklin Chang Diaz's VASIMR plasma rocket engine is very similar to the work I did in 1980 (see I.E.E.E. Transactions on Plasma Science, 10, 8, 1982), except that VASIMR uses ICRH ion heating. But it seems to me that ICRH would preferentially increase ion motion perpendicular to the magnetic field, when what one wants is ion motion parallel to the B field. I would think that, in any case, heating the electrons will ultimately accelerate the ions down the plasma potential gradient and out the magnetic nozzle. So preferential ion heating seems unnecessary anyway.