Friday, November 28, 2014

Emotion, fear, reflex

A Lego NXT robot can be given an innate fear of heights using code like:

#define USONIC          IN_1
#define WHEELS          OUT_AC
#define CLIFF_DETECTED  (SensorUS(USONIC) > 35)  // downward-pointing sensor reads > 35 cm: drop-off ahead
sub AvoidCliff()   // one possible reflex body
{
   Off(WHEELS);                   // stop at once
   OnRev(WHEELS, 50); Wait(500); Off(WHEELS);   // back away from the edge
}

I have experimented with five mobile robots that were given a fear of heights in order to keep them from falling down stairs or off a tabletop. Of course, we can arrange for fear to be modulated by other cognitive processing.
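The modulation idea can be sketched in a few lines of illustrative Python (not the robots' actual firmware; the 35 cm threshold comes from the fragment above, while the `caution` parameter and its scaling rule are my own assumptions):

```python
def cliff_fear(distance_cm, threshold_cm=35.0, caution=1.0):
    """Reflexive fear of heights: fires when the downward-pointing
    ultrasonic sensor reads farther than the threshold.
    Other cognitive processing modulates the reflex via `caution`:
    caution > 1 makes the robot more fearful (fires on smaller drops);
    caution < 1 suppresses the reflex when the robot judges it safe."""
    return distance_cm > threshold_cm / caution

# Pure reflex: a 50 cm reading past a 35 cm threshold signals a cliff.
# With caution halved, the same reading no longer triggers fear.
```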

Wednesday, November 26, 2014

Novelty detector/filter

In Asa H, one or more similarity measures examine each newly input spatiotemporal pattern and either add it to an existing cluster (case) or record it as a new (novel) pattern (see my blogs of 10 Feb. 2011 and 14 May 2012 for code).
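The gist of such a novelty filter can be sketched as follows (illustrative Python, not Asa H's actual code, which is in the posts cited above; cosine similarity and the 0.9 threshold are assumptions for the example):

```python
import math

def similarity(a, b):
    """Cosine similarity between two equal-length pattern vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def observe(pattern, cases, threshold=0.9):
    """Merge `pattern` into the best-matching case if it is similar
    enough; otherwise record it as a novel case. Returns True if novel."""
    best = max(cases, key=lambda c: similarity(pattern, c), default=None)
    if best is not None and similarity(pattern, best) >= threshold:
        for i, x in enumerate(pattern):        # familiar: average into
            best[i] = 0.5 * (best[i] + x)      # the existing cluster
        return False
    cases.append(list(pattern))                # novel: record a new case
    return True
```

A pattern close to a stored case is absorbed; a dissimilar one starts a new cluster.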

Tuesday, November 25, 2014

Turing tests

I have no intention of subjecting Asa H to the Turing test.  Humans suffer from a number of cognitive defects like confirmation bias, anchoring, framing, the focusing illusion, motivated reasoning, false memory, etc.  These could be used to distinguish a human reasoner from an AI.  Perhaps I could give Asa H such defects but I have no desire to do so.

Monday, November 17, 2014

Robot AI

I am not someone who believes that an AI, in order to be intelligent, must be embodied. I just find that it's easier to define some primitives in terms of sensors and their signals.

Friday, November 14, 2014

Multi-mind effect

Selmer Bringsjord et al. of the Rensselaer AI lab state that "logically untrained individuals cannot solve problems that require context-independent reasoning" but that groups of the same individuals can solve such problems. They further state that "in decision-making ... using only one representation or one type of reasoning can lead to erroneous conclusions." This argues, again, for a society of intelligent agents and for scientific pluralism.

Monday, November 10, 2014

Minimalist programming

I see the human as the weak link in computer programming, so I try to keep things simple for the human. Consequently, I write much of my code in a subset of BASIC, with a bit of PROLOG when I want to do logic programming. Clearly, some compromises are involved.

Thursday, November 6, 2014

Creativity in Asa H 2.0

Asa H discovered and reported to me that (in the context of something like a neural network with feedback, or the related models of consciousness) "feedback narrows view." This was something I did not know. Much as with oscillators in electronics, feedback narrows bandwidth. Subsequently, searching Google, I found evidence that cortical feedback in the human brain may restrict the receptive field of a cortical cell assembly (Krupa et al., Proc. Nat. Acad. Sci., 6 July 1999, pg. 8200).
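The electronics analogy can be made quantitative. For a single-pole amplifier A(f) = A0/(1 + j f/fc) with positive-feedback fraction beta, the closed-loop corner frequency shrinks to fc(1 - beta*A0): gain goes up, bandwidth narrows. A numerical sketch (the component values A0 = 100, fc = 1000 Hz, beta = 0.009 are invented for illustration):

```python
import math

def gain(f, A0=100.0, fc=1000.0, beta=0.0):
    """Closed-loop gain magnitude of a single-pole amplifier
    A(f) = A0 / (1 + j f/fc) with positive-feedback fraction beta
    (requires beta * A0 < 1 for stability)."""
    A = A0 / (1 + 1j * f / fc)
    return abs(A / (1 - beta * A))

def half_power_bandwidth(beta, fmax=20000.0, steps=200000):
    """First frequency where the gain drops to 1/sqrt(2) of its DC value."""
    g0 = gain(0.0, beta=beta)
    for k in range(1, steps + 1):
        f = fmax * k / steps
        if gain(f, beta=beta) <= g0 / math.sqrt(2):
            return f
    return fmax

# Open loop (beta = 0): DC gain 100, half-power bandwidth ~1000 Hz.
# With beta = 0.009 (so beta*A0 = 0.9): DC gain rises to 1000 and the
# bandwidth shrinks to ~100 Hz -- the "view" has narrowed tenfold.
```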

Tuesday, November 4, 2014

Can truth be a compromise?

Yes, it can. A voting classifier is one example. One model/theory may predict one trajectory for a hurricane while another model predicts a different trajectory. The average of the two (or more) models may do the best job of predicting the hurricane's actual path. Scientific pluralism again. (See my blog of 17 Aug. 2012.)
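A toy version of this consensus forecasting, in illustrative Python (the track coordinates are invented; real ensemble forecasting is far more elaborate):

```python
def average_forecast(tracks):
    """Average several models' forecast positions point by point.
    Each track is a list of (latitude, longitude) fixes."""
    n = len(tracks)
    return [(sum(p[0] for p in fixes) / n, sum(p[1] for p in fixes) / n)
            for fixes in zip(*tracks)]

def track_error(forecast, actual):
    """Mean absolute error (degrees) between forecast and actual fixes."""
    return sum(abs(f[0] - a[0]) + abs(f[1] - a[1])
               for f, a in zip(forecast, actual)) / len(actual)

# Two models that err on opposite sides of the storm's true path:
model_a = [(25.0, -80.0), (26.0, -81.5)]
model_b = [(25.0, -79.0), (26.0, -79.5)]
actual  = [(25.0, -79.6), (26.0, -80.4)]
consensus = average_forecast([model_a, model_b])
```

When the individual models' errors partly cancel, the averaged track lands closer to the actual path than either model alone.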


My general principle is to vote for the furthest left-leaning candidate who has a reasonable chance of winning. (So I voted for Gore rather than Nader back in 2000.) I'd like to see a social democratic government in the U.S. I get a lot of political emails. I contributed to the Obama campaign in 2012, and that seems to have pushed the number of emails even higher. In the last few months I have been getting so many that I delete them on sight. Anything useful they might have to say (like get-out-the-vote efforts) has long since been buried in the requests for money. They've become self-defeating.

Voter turnout is too low. Should citizens simply be required to vote, with a "none of the above" option on the ballot? (The Athenians held that voting was a duty. Today, 22 countries have compulsory voting; 10 of these enforce it.)