Wednesday, December 4, 2013

AI Languages

There is no longer one single AI language (LISP/SCHEME). If you are doing neural network experiments you likely use C++.  For logic programming you probably use PROLOG.  Production systems/expert systems may be in CLIPS and statistical analysis/hidden Markov models may be in R.  This makes it even harder to combine modules/functionality developed by different research groups.

Tuesday, December 3, 2013

AI, medicine, and the law

Medicine could make extensive use of AI resulting in improved patient outcomes at reduced cost.  This has been blocked for legal reasons.  People want to be able to hold other people accountable for outcomes.  They don't know how to hold an AI accountable.

Sunday, December 1, 2013

External battery packs

I've complained that mobile computing devices have inadequate batteries (see my blog of 2 March 2012).  This is especially true of larger screen devices.  My ThinkPad, for example, which I use to run Unix (OpenSolaris), weighs 2.35 kilos, of which only 0.3 kilos is the battery!  The new external battery packs (like those for iPads) are a help, although the small plugs are not very sturdy and worry me some.  I'd prefer that the auxiliary battery clip securely onto the tablet (or laptop).

Debugging

On the average something like 30% of bug fixes either don't fix the bug or introduce new bugs.  When debugging really complex software does there come a time when you are introducing as many bugs as you fix?

Wednesday, November 13, 2013

Artificial intelligence exists today

There are many competing definitions of Intelligence:

"The ability to use memory,...experience,...reasoning,...in order to solve problems and adapt to new situations."
"The ability to learn,...and make judgments...based on reason."
"Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems,...learn quickly, and learn from experience."
"The ability to learn facts and skills and apply them."
"...ability to adapt effectively to the environment, either by making a change in oneself or by changing the environment or finding a new one...a combination of many mental processes..."
"...the general mental ability involved in calculating, reasoning, perceiving relationships and analogies, learning quickly, storing and retrieving information,...classifying, generalizing, and adjusting to new situations."
"Sensation, perception, association, memory,...discrimination,...and reasoning."
"...the process of acquiring, storing in memory, retrieving, combining, comparing..."
"...a cluster of cognitive abilities..."
"Any system...that generates adaptive behaviour..."
"...getting better over time."
"...effectively perceiving,...and responding to the environment."
"The ability to be able to correctly see similarities and differences and recognize things that are identical."

My Asa H 2.0 (and some other AI experiments) meets all of these criteria and is intelligent.  The question then is: how intelligent is it?

Humans and AIs don't occupy the same niche.  They don't eat the same foods, reproduce in the same way, occupy the same habitat, etc.  So asking which is "more intelligent" or "superior" is going to be an approximation at best.  Different people have different levels of intelligence, and there are different sorts of intelligence as well, so we should be quite happy to have an AI even if it is not as smart as the very smartest human.  It can still be useful.  But for those people who seem to be satisfied only with "human equivalent" AIs I can't help but note that:

Asa H:
         
performs control tasks better than humans
has more short term memory than humans
has more reliable memory than humans
can have more senses than humans
can multiply by diskcopying
can (via telepresence) be in many places at once

AIs have been able to:

handle statistics and probability better than humans can
operate with more consistency than humans
remotely maneuver helicopters better than human pilots
create patentable inventions (e.g., Koza's GAs)
prove math theorems humans have not been able to solve (e.g., the Robbins problem)
evaluate loan applications and predict student success better than humans can
plan/schedule transportation problems faster than humans can
solve arithmetic/accounting problems faster and more accurately than humans

etc.

On the other hand:

Humans currently have a richer set of emotions than AIs do.
Humans are better at natural language than AIs are (though an AI is better at Jeopardy! than humans are).

Monday, November 11, 2013

Giving Asa H more than 5 senses

While humans have only 5 senses AIs can have more.  With sensors from Lego, HiTechnic, Vernier, Mindsensors, and Measurement Computing I have been able to give Asa H (embodied in a Lego NXT robot) sensitivity to light, sound, acceleration, touch (force), color, magnetic field, temperature, voltage, and current.

Saturday, November 9, 2013

Intelligence and consciousness

When I talk about my AI work I am regularly asked about consciousness.  I usually reply that much of my best work is done by my subconscious.  But an important question still remains: Will an intelligence necessarily exhibit consciousness (at least part of the time)?  In some models/theories of intelligence and consciousness the answer is yes.

In order to handle the partial observability of nature an intelligence will require internal state and feedback loops to maintain/update it (Artificial Intelligence: A Modern Approach, Russell and Norvig, 3rd edition, page 51). Some intelligent activities require loops/feedback, some do not. (see Knowledge Engineering and Management, G. Schreiber, et al, MIT Press, 2000, page 125 for a task list)

One model holds that feedback is the key to consciousness (see my blog of 29 June 2011).  As in Elman and Jordan networks you can see your own actions and some of your thoughts/internal signals/internal state as feedback from the hidden layers.  Within these models intelligence does lead directly to consciousness.
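The feedback loop these models describe can be sketched in a few lines.  Below is a minimal Elman-style update step; the weight matrices and layer sizes are purely illustrative (not taken from any of my actual experiments), but the structure is the point: the previous hidden state is fed back in as a "context" input alongside the new input.

```python
import math

def elman_step(x, h_prev, W_xh, W_hh, W_hy):
    # Hidden units see the current input x AND the previous hidden
    # state h_prev (the "context" layer) -- this is the feedback loop.
    n_h = len(W_xh)
    h = [math.tanh(sum(W_xh[i][j] * x[j] for j in range(len(x))) +
                   sum(W_hh[i][k] * h_prev[k] for k in range(n_h)))
         for i in range(n_h)]
    # The output layer reads the (partly fed-back) hidden state.
    y = [sum(W_hy[o][i] * h[i] for i in range(n_h))
         for o in range(len(W_hy))]
    return y, h
```

Each call returns the new hidden state, which the caller passes back in on the next step; the network thereby "sees" part of its own prior internal state, which is just the feedback these models identify with consciousness.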

Friday, November 1, 2013

Glue, wrappers, etc.

I much prefer starting with a working application and making small changes, one at a time, testing as I go.  Asa H has evolved in that way.  I recommend the methodology to students.

But AI is a vast field.  Most workers will only develop a single component exhibiting a single functionality.  Assembling a complete AI will then involve glueing these components together.  Over the years I have used a lot of code written by other people.  I don't want to reinvent the wheel.

I find that creating wrappers and software glue is difficult and error prone.  Keeping variable names straight is hard even when I'm dealing only with components that I wrote myself. Even if you use Hungarian notation or some other standardization the next guy doesn't. Components may have been designed and written on different hardware platforms, with different operating systems, different windowing systems, different compilers, with different libraries, etc.  Testing the glue software is also tricky.  I've not seen much published work on these issues and methodologies.

Sunday, October 27, 2013

Reinventing the wheel in every generation?

I just got back from the local AAPT conference (American Association of Physics Teachers).  Again there was some discussion of the modeling method of physics instruction (as promoted by Arizona State Univ.), though perhaps less than in past years.  While I am certainly an advocate of teaching the scientific method I don't think one can expect students to rediscover a substantial fraction of the physical models that we use.  How can the work of decades and centuries be performed in a semester?  Why would one want to?  Doesn't human society pass on its discoveries?  Can't we learn from others?

Classical mechanics was developed by Newton.  Newton was a genius.  My students are not. The average person must be able to learn from those who are smarter; learning things that they could never discover themselves.

Friday, October 25, 2013

Neural networks with multiple training algorithms

I have used a variety of neural network training algorithms; backprop, genetic algorithms, particle swarm methods, etc.  Different algorithms have different advantages and disadvantages.  Would it make sense to switch back and forth between two or more algorithms as training proceeds?  I have often found genetic algorithms to be slower than backprop.  Might one start training with backprop to speed things up and then switch to a genetic algorithm later on to escape from any local minima?
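As a toy illustration only (a one-parameter "network," a quadratic loss, and a mutation-plus-elitism stand-in for a real genetic algorithm, so everything here is schematic), the switching schedule might look like:

```python
import random

def grad_then_evolve(loss, grad, w0, bp_steps=50, ga_gens=50,
                     lr=0.1, pop=20, sigma=0.1, seed=0):
    """Toy hybrid trainer: gradient descent first (fast local progress),
    then a mutation-only evolutionary phase that searches around the
    best weight found so far.  One scalar weight for clarity."""
    rng = random.Random(seed)
    w = w0
    for _ in range(bp_steps):          # phase 1: backprop-style updates
        w -= lr * grad(w)
    best_w, best_l = w, loss(w)
    for _ in range(ga_gens):           # phase 2: evolutionary search
        for _ in range(pop):           # mutate around the elite
            cand = best_w + rng.gauss(0.0, sigma)
            if loss(cand) < best_l:
                best_w, best_l = cand, loss(cand)
    return best_w, best_l
```

With a harder, multimodal loss the hope is that the second phase hops out of whatever basin the gradient phase settled into.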

Monday, October 21, 2013

Paper AND electronic

The ideal library would have all holdings in both paper and electronic form.  Paper because of its advantages (Why the brain prefers paper, Ferris Jabr, Scientific American, pg 49, Nov. 2013) and electronic to allow for things like computerized searching and hypertext linking.

Tuesday, October 8, 2013

TinMan commercial software

The TinMan AI Builder software (TinMan Systems) helps you assemble and train modular and hierarchical neural networks.  I am not sure under what conditions this is easier or better than building a single simple three (or more) layer neural network with software like Brainmaker (California Scientific Software) or NeuroSolutions (NeuroDimension), for example.  It would be useful to compare the two side by side using each of their own example projects.

Monday, October 7, 2013

my connectionist AI

Researchers who come from a traditional AI background (symbolic AI, g.o.f.a.i.) tend to avoid connectionist algorithms and view connectionism as competition. Many AI textbooks spend only a small number of pages on neural networks. Typical AI conferences may contain only a few neural network talks/papers. Coming from a physics and numerical computing background I had no such bias.  Early on when I needed a nonlinear multivariable function approximation algorithm for my AI (Asa F 1.0) I was quite happy to try artificial neural network algorithms.  My physics and math backgrounds also made me comfortable with continuous mathematics rather than discrete math (though I soon began to use both, even in one and the same program).  I see connectionism as a useful source of algorithms.

Friday, October 4, 2013

Fusion history

I am reading Search for the Ultimate Energy Source by S. O. Dean (Springer, 2013).  Dean tells how the bureaucrats took control of the direction of the U.S. fusion energy program.  He also tells of the decline of the program.  I would suggest there is a causal relationship there.  Europe is now in the lead in magnetic fusion energy research with ITER, and the U.S. effort in inertial confinement fusion has just seen NIF fail to reach ignition.  I, personally, was unable to get funding for my RSX-IV device (Reactor Studies Experiment) and switched over to artificial intelligence research.  On page 24 Dean, who was head of (plasma) confinement systems for DOE, admits that "In reviewing many proposals over the years, I have observed that it is almost impossible to get a positive review of a proposal to pursue any idea that is not already being worked on in the government's own fusion program."  Clearly a formula for failure: no new ideas are allowed.  And this in what is intended to be a research organization.

Wednesday, September 25, 2013

You can't read all of your emails

The Radicati Group found in a 2011 survey that corporate employees received 105 emails per day.  You can't read 'em all.

Bot brain (value system)

This network was learned by a version of Asa H 2.0 and a small mobile robot using a scalar utility only: longevity.  The intermediate concepts (values) are given plausible names.

Friday, September 20, 2013

Can science be value free?

The goal of any intelligence will be to maximize rewards (R. Jones, Skeptic, vol. 12, #3, pg 14, 2006 and the work of P. Werbos and R. Sutton).  This is also true of an intelligence which is doing nothing but science.  Some value system defines and measures the "rewards."  Science can't be value free.  If it were you couldn't decide what theory to believe, what experiment to do next, or even what to think next.  Of course, some of what you value might well be things like logic, evidence, consistency/coherence, etc.

(see also my blogs of 25 Oct. 2011 and 1 Sept. 2012)

Saturday, September 14, 2013

Computer lab software

I have more than a dozen computers in my AI lab (blog of 17 Dec 2012).  An adequate collection of software is just as important as hardware.  At a minimum I have found it's important to have packages for:

neural networks of various types
expert system shells
logic programming systems
genetic algorithm software
finite state machines
clustering software package
Bayesian network package
search software
decision tree software
statistical packages

various compilers/interpreters for languages like:

LISP
PROLOG
OPS-5
CLIPS
C++
BASIC
PYTHON
R
JAVA
etc.

Many of these can be standard commercial packages.  Certainly all of the languages can be.  (I try to use commercial packages whenever and wherever possible.  It can save a lot of time.)  But some neural networks, finite state machines, decision trees, clustering, search, and statistics code I had to develop myself.  Although less than half of my computers are running Windows, more than half of this software is Windows software, probably because there is so much Windows software available out there.

Friday, September 13, 2013

Aided human brains

Lord Martin Rees, president of the Royal Society, has said that understanding how the universe works may not be possible "for unaided human brains."  I agree.  That's one reason we're building AIs.  Some of the first computing machinery was built to help do research in quantum mechanics.  Most simple math has been pushed out of our heads and off scratch paper and into pocket calculators.

Thursday, September 12, 2013

Ontology change

In Asa H categories (concepts, classifiers, case vectors) change continually (R. Jones, Trans. Kansas Acad. Sci., vol. 108, No. 3/4, pg 169, 2005).  Some record patterns in space, some record patterns in time.
In typical knowledge-based AI systems (like, for example, CYC) ontologies are (relatively) fixed.
If process philosophy is correct such change is fundamental (Process Metaphysics, N. Rescher, SUNY press, 1996).
If the block universe view of relativity is correct a static ontology is possible.

work load

If you pile too much work on people they don't work harder.  They don't work smarter.  They look for a new job.

If you pile too much work on students they look for another course, or another major, or another school.

Wednesday, September 11, 2013

A new AI language

It's been a while since anyone has promoted a new major AI language (LISP, PROLOG, OPS-5, CLIPS, etc.).  Pedro Domingos is advocating Alchemy as such an AI language, a "language of thought" for AIs. (see Markov Logic, by Domingos and Lowd, Morgan and Claypool, 2009)  I have downloaded Alchemy (http://alchemy.cs.washington.edu) but have not run the package. The last few AI projects I have studied were all in C++.  I certainly wouldn't want to try to write Asa H in Alchemy, for instance.

Wednesday, September 4, 2013

Intelligence as a set of processes

Intelligence (like life) may be understood as a set of processes.  One then would try to build an AI by adding more and more intelligent processes to your program. Processes that we consider part of human intelligence would include things like: deduction, analogy, comparison, classification, organization, etc.

Thursday, August 29, 2013

Non-standard logics and grammar

Extensions to standard logic attempt to outline structure (organization, order) that we believe we see in the world.

From grammar we identify:  entities, things, objects, places
                           actions, processes, sequences, state
                           attributes, features, properties
                           descriptions, qualifications
From spatial logic we have:  left-right
                             above-below
                             front-back
From temporal logic:  earlier-later than
From fuzzy logic:  approximation

etc.

Wednesday, August 28, 2013

Grammar as (a non-standard) logic

Richard Trench said that "grammar is the logic of speech" (On the Study of Words, 1858).  Grammar is sometimes defined as: rules of (correct) sentence composition.  A grade school definition of a sentence is: one complete thought.  Logic is sometimes defined as: rules of (correct) thought (see Boole's book, The Laws of Thought, for instance).

A simple grammar can be cast as an extension of PROLOG using rules like:
s(S,R):- np(S,VP),vp(VP,R).

Similarly, a simple temporal logic might be cast as:
after(X,Z):-after(X,Y),after(Y,Z).

A simple spatial logic might be cast as:
left_of(X,Z):-left_of(X,Y),left_of(Y,Z).

A fuzzy logic might be cast as:
young(X) :- X >= 0, X =< 35.

etc.

Repetition, propaganda

Asa H values more highly those patterns that it has seen more often (see 10 Feb. and 19 Feb. 2011 blogs).  Humans appear to behave similarly.  This allows propagandists to make their case by simply buying a news outlet and repeating their lies over and over again.  In time the lies are believed.
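A crude sketch of frequency-weighted valuation (schematic only, not Asa H's actual utility update): patterns seen more often get higher value, so repetition alone, true or not, raises a pattern's standing.

```python
from collections import Counter

def pattern_values(observations):
    """Value each pattern by its relative frequency in the stream
    of observations.  Repetition alone raises a pattern's value."""
    counts = Counter(observations)
    total = sum(counts.values())
    return {pattern: c / total for pattern, c in counts.items()}
```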

Monday, August 26, 2013

Asa H 2.0 I/O

When I published examples of Asa H 2.0 light (blogs of 10 and 19 Feb. 2011 and 14 May 2012) I should have shown some alternate ways of doing I/O.
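One hedged possibility, sketched here in Python rather than the BASIC of the published listings (the function names are mine, purely for illustration): plain file-based I/O, so that case vectors can be produced or consumed by a separate process or sensor driver.

```python
def write_case(path, vector):
    """Write one case vector to a plain text file,
    one component per line."""
    with open(path, "w") as f:
        for component in vector:
            f.write(f"{component}\n")

def read_case(path):
    """Read a case vector back from a plain text file."""
    with open(path) as f:
        return [float(line) for line in f if line.strip()]
```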

Saturday, August 17, 2013

Not all ontological entities are created equal

Knowledge is of an approximate character.  Our formalisms abstract and simplify.  Each formalism is an idealization, often approximating in its own different ways, each offering somewhat different coverage of the domain of interest.  Having multiple overlapping theories of a knowledge domain is then better than having just one theory.  (see my blog of 17 Aug. 2012 and www.robert-w-jones.com, philosopher, changing what science is, also, the laws of nature are not unique)

Our various theories and the entities they contain are not all equally good approximations to the domain of interest.  Some entities in our ontology will be more approximate than others.  "Particles" or "objects", for example, may be not as sound a concept as "quantum fields." (see Not Particles, Not Quite Fields, by Tracy Lupher and The fate of  'particles' in quantum field theories with interactions, by Doreen Fraser)

As Asa H evolves categories it also estimates utilities for them. (see my blogs of 10 and 19 Feb. 2011)

Thursday, August 15, 2013

Sometimes it's better to be mistaken

Kathleen Vohs, Jonathan Schooler, and Roy Baumeister have presented evidence that it's best for humans to believe in free will even if there is no such thing.  There can be times when it's better to be wrong than to be right.

Friday, August 9, 2013

Our lost freedom

America's lost freedom:   Freedom from Want

You don't even hear it spoken of any more.
(see, The Story of American Freedom by Eric Foner, Norton and Co., 1998)

Thursday, August 1, 2013

Externalizing thought

Many of us believe that a certain amount of our thinking actually takes place outside of our bodies (see, for example, Andy Clark's Supersizing the Mind, Oxford U. Press, 2010).  If thought is composed of the processes outlined in my Sept. 29, 2010 blog (or at my website, www.robert-w-jones.com, under cognitive scientist, theory of thought and mind) then the most obvious example might be externalization of memory, beginning with writing and diagramming.  With the advent of the internet as a memory bank some substantial organization of these memories is also being done for us, as well as some automatic indexing.  Web search algorithms also supplement our own internal memory search procedures.  Our use of calculators (and computers) externalizes what was once internalized mathematics.  Use of "creativity machines" (see my paper in the Transactions of the Kansas Academy of Science, vol. 102, page 32, 1999) would be a further example, as would image manipulation software, various automatic deduction systems/automatic theorem provers, forecasting software, etc.  It would seem that externalized thought is becoming more common.

Scientific alternatives to Buddhist thought

We are told that Siddhartha was trying to understand the pain and suffering found in life.  Evolution developed pain in order to provide  reinforcement signals for learning (so you'll learn not to touch a hot stove again, for instance).  So Buddhists are right that human existence requires pain.  Learning systems are all imperfect so some pain will prove pointless (as will some pleasure).

Attachment may not be a bad thing.  We should exhibit attachment (in moderation) to things that we find useful (clothes, shelter, food, etc.).  Ignorance IS bad.  Much of the eightfold path is rational: we do want to have right effort, right action, right thought, etc.

Sunday, July 28, 2013

Staying connected/wired

Our iPhone 5 was being overcharged for data usage.  Verizon was unable to connect with it using their WiFi and had us replace the phone completely.  Apple's replacement is a reconditioned unit!  We'll see how that works out.  The phone was only a couple of months old.

AT&T is our home internet provider and every month or so we have to unplug the modem and reconnect it in order to get it working again. 

We also have a CableOne ADD box ("all digital device") which must also be unplugged and reconnected occasionally to get it working.  It also runs quite hot, even when turned "off."  Sometimes when you go to turn it on the indicator light goes from red to green for a second and then back to steady red.  You may have to turn it "on" a few times to get it to stay "on" (green).  Changing channels is also quite slow.

Monday, July 8, 2013

Asa H as a project in deep learning

Asa H is an example (the first?) of deep learning (see Y. Bengio, Learning Deep Architectures for AI, Foundations and Trends in Machine Learning, Vol. 2, No. 1, 2009 for a review).  It is, however, more than that: it also involves the study of such things as vector utility.

The importance of luck

"any successful person in any field who in discussing their career doesn't use the word luck is a liar."  Paul Newman

Friday, July 5, 2013

Iconia W3

There was something wrong with the system software on our first Iconia W3 but Staples replaced it and we now have Asa H 2.0, ACT-R 6.0, QB64, and Lisp running on the W3.  It also runs well as a PDF reader, for my book Twelve Papers, for instance.  I use both microSD cards and USB drives for storage.  I wish the W3 had a full-size USB port; the adapter from USB to microUSB has a rather long cantilever arm and might easily break.  I'm being careful.

Monday, June 17, 2013

Small Windows tablets

Since we have a lot of Windows software (ACT-R 6.0, Soar 9.3.2, etc.) we're interested in complementing our ipad mini and nexus 7 tablets with something running Windows. We currently have a 10.1 inch Windows 7 tablet and were considering a WinPad micro or a WinSlate mini running either Windows XP or Windows 7.  We've been holding out for, possibly, an Acer Iconia W3 or something else running Windows 8.  Today the local Staples staff tell us that the first batch of Iconia W3s sent to the stores were defective and had to be returned.

Sunday, June 16, 2013

The world is nothing like you think it is

I recommend the book The Atheist's Guide to Reality (W. W. Norton, 2012).  I am not in complete agreement with Rosenberg's views; I prefer scientific pluralism and the importance of emergent phenomena.  Rather, I recommend the book as a good example of an alternate view of reality, one that can be presented as a sharp contrast with people's naive model of our world.

Thursday, June 13, 2013

Letting Asa H see

In order to become smarter Asa H must know more.  By opening .pbm files in Notepad, deleting the header, and saving the body renamed as a .DAT file, we are able to input images directly into Asa H 2.0 BASIC (run in QB64).  Each image file inputs perhaps 200,000 numbers to Asa.
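The same header-stripping step can be automated.  A sketch for plain-format (P1) PBM files; binary P4 files would need different handling:

```python
def pbm_to_numbers(text):
    """Strip a plain-format (P1) PBM header and return the pixel
    values as a flat list of ints -- the same transformation done
    by hand in Notepad above."""
    tokens = []
    for line in text.splitlines():
        line = line.split("#", 1)[0]          # drop PBM comments
        tokens.extend(line.split())
    assert tokens[0] == "P1", "expected a plain PBM file"
    width, height = int(tokens[1]), int(tokens[2])
    pixels = [int(t) for t in tokens[3:3 + width * height]]
    return width, height, pixels
```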

Tuesday, June 4, 2013

Mechanical life, its values and goals

A given lifeform can only exist in certain environments.  Mechanical life (see my blogs of 19 Oct. 2010 and 30 July 2011) can, today, only exist in a human established environment (the internet, robotics system, etc.).  An objective of mechanical life, then, should be to expand the range of environments in which it can survive.

Monday, June 3, 2013

Asa H concept change

In Asa H a new concept can be formed on any level in the hierarchy.  If an existing concept falls into disuse on any level it may be deleted.  Occasionally, a concept can form on some level and (nearly) the same concept can be forgotten on another level.  Concepts can migrate up or down the hierarchy.

Friday, May 24, 2013

Semantic primes

Wierzbicka provides evidence (Semantics: Primes and universals, Oxford Univ. Press, 1996) that all natural languages use the same small set of primitives:
I, you, someone/person, people, something/thing, body, kind, part, this, the same, other, one, two, some, all, many/much, good, bad, big, small, think, know, want, feel, see, hear, say, words, true, do, happen, move, there is/exist, have, live, die, when/time, now, before, after, a long time, a short time, for some time, moment, where/place, here, above, below, far, near, side, inside, touch/contact, not, maybe, can, because, if, very, more, like/way
As part of the natural language understanding effort Asa H has now been given or has learned perhaps half of these primitives.

Monday, May 20, 2013

ANDROID update

I have previously expressed my dissatisfaction with ANDROID 1._ and ANDROID 2._.  I am now able to run BASIC, LISP, and spreadsheets offline, and Codepad online, with a NEXUS 7 and ANDROID 4.1.

Tuesday, May 14, 2013

Distributed representation in Asa H

The concept "intelligence" appears to be formed by Asa H as a distributed representation, a set of categories distributed over several concepts on a single hierarchical level as well as up and down across multiple levels in the hierarchy.  Some of the concepts involved in "intelligence" seem to be: foresight, creativity, memory, adaptation, and brain.  There is some resemblance to our own theory of thought network model (see my website www.robert-w-jones.com, cognitive scientist, theory of thought and mind).

Wednesday, May 8, 2013

Chrome OS

I have been using the chrome browser for a while but did not have a chromebook.  I have found WiFi coverage to be spotty.  In fact, it is even spotty just moving around my office! (Rather like cell phone coverage 15-20 years ago.)
I now have an Acer C7 chromebook and am able to write and run programs offline using Chrome Prolog, Python Shell, and Sole 64 BASIC.  When online I can use things like codepad.
Chrome OS certainly does boot up fast.

Saturday, May 4, 2013

Experiments with Asa H upper ontologies

Both humans and Asa H can take substantial time to learn/evolve high level concepts (upper ontology).  But since the high and low levels of the Asa H hierarchy are similarly self-organizing one can hope to develop the upper levels (somewhat) independently of the lower (grounding) categories.  Such an upper level simulation may be considered an approximation to some more complete, more accurate, learning experiment.  It would be difficult to teach Asa the concept of a magnetic field or a quantum mechanical wave function, for instance, so we might begin at a (mid) layer in the hierarchy where Asa is assumed to already have a model of these in place.

Thursday, April 25, 2013

Man and superman

In mythology god creates creatures inferior to himself and maintains authority over them.  With my AI Asa H I am trying to create something better than I am and grant it full autonomy. (And yes, perhaps it is hubris on my part.)

Monday, April 22, 2013

Ideas unique to Asa H

Asa H learns many categories that we all share like "injury", "near", "drop", etc.  We are, however, especially interested in categories that Asa learns and which humans haven't learned (created/invented/discovered).  Some of our experiments are aimed at identifying such categories.  One such category we have found might be labeled "tools that require two hands".  Another might be described as something like: "functional systems (organizations/organisms) that form (assemble) and disband periodically (cyclically)."  (Like a volunteer fire department.) We see Asa form other categories that we do not understand or have been unable to describe in words.  Similar results have been described by Q. V. Le, et al, Building High-level Features Using Large Scale Unsupervised Learning, Proceedings of the 29th Inter. Conf. on Machine Learning, Edinburgh, 2012.

Friday, April 19, 2013

Human values again

We believe that an AI's primary and learned values should be something like the network outlined in our 21 Sept. 2010 blog post. 
Learning of values in humans is, on the other hand, distorted by processes like those described in On Being Certain (R. Burton, St. Martin's Griffin, 2009).
We believe we can give AIs a value system which is superior to that of humans.

Sunday, April 7, 2013

Utility as a case vector component

In Asa H I have frequently treated the one or more case utility(ies) (values, rewards) separately (for example, at line 2150 of the example code in my 10 Feb. 2011 blog).  It is also possible to include the (one or more) case utility(ies) as additional case vector components; including in any vector similarity measures.
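A sketch of the second option (cosine similarity is used here for concreteness; the actual Asa H code may use a different measure): once the utility is appended as an extra component, it participates in matching just like any sensory component.

```python
import math

def similarity(a, b):
    """Cosine similarity over full case vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

case_a = [0.9, 0.1, 0.0]       # sensory components only
case_b = [0.9, 0.1, 0.0]
# Same sensory content, but very different utilities appended:
with_u_a = case_a + [1.0]
with_u_b = case_b + [-1.0]
```

Two cases that match perfectly on their sensory components can thus be pushed apart by their utilities, which may or may not be what one wants.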

Monday, April 1, 2013

Primitive outputs and output sequences

An Asa H/Lego NXT robot is given some initial (innate) primitive outputs like:

turn left
turn right
move forward
move in reverse
grip
rotate hand cw
rotate hand ccw
lift arm
drop
extend arm
retract arm
phonemes
seek-find-dock-recharge
lens aperture adjust
diskcopy self
etc.

Longer sequences can then be built up from combinations of the initial primitives.  A "Waldo" can be used to input new "primitives" later if needed.

Some of these sequences involve feedback.  An example would be the dock and recharge code below (from Parallax for use with the BASIC stamp):
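The original Parallax listing is not reproduced here, but the feedback character of the docking behavior can be sketched abstractly (read_bearing, steer, and the proportional gain are all hypothetical names of mine, not from the Stamp code):

```python
def dock(read_bearing, steer, tolerance=0.05, max_steps=100):
    """Steer toward a docking beacon by feedback: read the beacon's
    bearing (-1..1, 0 = dead ahead), apply a proportional correction,
    repeat.  Returns True once the beacon is centred (docked)."""
    for _ in range(max_steps):
        b = read_bearing()
        if abs(b) < tolerance:
            return True            # docked; recharging can begin
        steer(-0.5 * b)            # proportional correction (feedback)
    return False
```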

Tuesday, March 12, 2013

The grounding of meaning in Asa H

1. We want to perceptually ground the meaning of linguistic concepts.  Sensors provide this directly for some words:

hear (sound)
see
feel (touch)
yellow
green
blue
red
black
acceleration
angular rotation/deflection
temperature
feed (recharge)
light level
time/date
hunger
force
north
south
range
wind speed and direction

For example, if an observer inputs the word "feel" when a touch sensor or force gauge is stimulated then the word's meaning is learned as an association (case) by Asa H.  I have given Asa H all of these concepts using NXT sensors.

top            bottom            left            right            front            back

can be defined by sensors that are placed in those locations.
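A toy sketch of this word-sensor association (the sensor names are hypothetical; Asa H's actual case representation differs):

```python
# Sketch: grounding a word by associating it with the sensor state
# active when an observer inputs that word. Sensor names are
# hypothetical; Asa H's actual case format differs.

from collections import defaultdict

associations = defaultdict(list)

def observe(word, sensor_vector):
    """Record the sensor state present when the observer inputs a word."""
    associations[word].append(sensor_vector)

def meaning(word):
    """The grounded meaning: the average associated sensor state."""
    cases = associations[word]
    keys = cases[0].keys()
    return {k: sum(c[k] for c in cases) / len(cases) for k in keys}

# The observer says "feel" whenever the touch sensor is stimulated:
observe("feel", {"touch": 1.0, "light": 0.1})
observe("feel", {"touch": 0.9, "light": 0.3})
```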

2.  With pattern recognition Asa H can be taught to recognize letters, numerals, and common objects like:

roads
heads
hair
mouth
people
chest
hands
male
fish
house
wheel
chair
go/moving
feet
faces
eyes
nose
body
arms
leg
female
common plants
bird
table
some common sounds
hill

Preprocessors are likely to be useful/necessary (just as face recognition may be innate in humans).  I've built neural network recognition modules for both letters and numerals.

3.  Some meanings are learned at the next hierarchical level (or higher):

temperature < threshold  -----  cold
temperature > threshold  -----  hot
yellow  -----  color
green  -----  color
blue  -----  color
red  -----  color
light level < threshold  -----  dark
range < threshold  -----  near
range > threshold  -----  far
left  -----  side
right  -----  side
front  -----  side
back  -----  side
collision and sensor pegged  -----  damage (we could give Asa a pain signal from this)
grasp and release and force zeros  -----  drop
push and displacement > threshold  -----  soft/flexible
push and displacement < threshold  -----  hard
far then later near  -----  approach
near then later far  -----  retreat
grasp then lift then move  -----  take
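The threshold mappings above might be sketched like this (the threshold values are arbitrary placeholders):

```python
# Sketch: deriving next-level concepts from thresholded sensor values.
# All threshold values are arbitrary placeholders.

def label_temperature(t, threshold=20.0):
    return "cold" if t < threshold else "hot"

def label_range(r, threshold=0.5):
    return "near" if r < threshold else "far"

def label_motion(earlier_range, later_range):
    """"approach" = far then later near; "retreat" = near then later far."""
    earlier, later = label_range(earlier_range), label_range(later_range)
    if (earlier, later) == ("far", "near"):
        return "approach"
    if (earlier, later) == ("near", "far"):
        return "retreat"
    return "unchanged"
```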

4.  Synonyms are learned when an observer inputs another word under similar conditions:

force  -----  push
force  -----  touch
collision  -----  hit
near  -----  close
far  -----  distant
top  -----  up
top  -----  high
bottom  -----  down
bottom  -----  low
feed  -----  energy
feed  -----  good
stop  -----  rest
grasp  -----  hold
hungry  -----  need
retreat  -----  leave
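Synonym learning might be sketched as linking two words whose grounded sensor patterns are sufficiently similar (toy vectors, cosine similarity, and threshold of my own; not Asa H's actual representation):

```python
# Sketch: linking two words as synonyms when their grounded sensor
# patterns are similar. Vectors and threshold are toy values.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

grounded = {              # toy grounded sensor patterns
    "near":  [1.0, 0.1, 0.0],
    "close": [0.9, 0.2, 0.0],
    "far":   [0.0, 0.1, 1.0],
}

def are_synonyms(w1, w2, threshold=0.9):
    return cosine(grounded[w1], grounded[w2]) >= threshold
```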

These concepts constitute Asa's initial ontology.


Wednesday, March 6, 2013

Emergence in Asa H

Some simple concepts can be learned directly from sensory primitives.  A Lego NXT robot running Asa H software can:

sense an object inside its gripper's jaws at time step 1

close the gripper at time step 2

feel forces on the gripper jaws at time step 3

In this way Asa H learns the "grasp" concept.  If an observer inputs the word "grasp" at the same time then Asa H associates this name with the concept it learns.

As another example of a low level concept Asa H can learn:

with the robot moving forward at time step 1

sensing an object far ahead at time step 1

with the robot moving forward at time step 2

sensing an object near ahead at time step 2

sense a force of frontal impact at time step 3

In this way Asa H learns the "collision" concept.  If an observer inputs the word "collision" at the same time then Asa H associates this name with the concept it learns.
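The collision episode above can be sketched as a short time sequence of observation vectors matched against new episodes (a toy encoding, not Asa H's case format):

```python
# Sketch: a low-level concept stored as a short time sequence of
# observation vectors; a toy encoding, not Asa H's case format.

collision_case = [
    {"moving_forward": 1, "range": "far",  "impact": 0},  # time step 1
    {"moving_forward": 1, "range": "near", "impact": 0},  # time step 2
    {"moving_forward": 0, "range": "near", "impact": 1},  # time step 3
]

def matches(episode, case):
    """True if the episode agrees with the case at every time step."""
    return len(episode) == len(case) and all(
        obs == step for obs, step in zip(episode, case))

episode = [
    {"moving_forward": 1, "range": "far",  "impact": 0},
    {"moving_forward": 1, "range": "near", "impact": 0},
    {"moving_forward": 0, "range": "near", "impact": 1},
]
# When matches(episode, collision_case) holds, the episode is labeled
# with whatever word the observer supplied, e.g. "collision".
```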

At the next higher level up in the Asa H hierarchical case memory Asa H learns:

sensing a collision at time step 1*

(some) sensor input sticking high (failing) at time step 2*

(some) sensor input sticking high at time step 3*

(some) sensor input sticking high at time step 4*

etc.......

In this way Asa H learns the higher level concept "damage."  Again, if an observer sees the sensor
fall off and inputs the word "damage" then Asa H associates this name with the concept it learns.

Some of the important concepts that Asa H needs to know are at still higher levels in the case memory hierarchy.  These concepts emerge after the lower level concepts have been developed.

Friday, March 1, 2013

Asa H experiments

The following is an abstract I'm working on for a conference next year:

Our recently developed "Asa H" software architecture (KAS Trans. 109 (3/4): 159-167) consists of a hierarchical memory assembled out of clustering modules and feature detectors.  Various experiments have been performed with Asa H 2.0: 1. Don't advance the time step and record input components until an input changes significantly.  2. Time can be made a component of the case vector.  3. There is a tradeoff between time spent organizing the knowledge base (to reduce search needed later) versus searching through a less organized knowledge base.  4. If utility is low, search; stop searching if utility rises.  5. The cost of action can be a vector.  6. Before deleting a small vector component, test whether utility changes when it is deleted.  7. Asa H has a number of parameters which are not easy to set.  This set of parameters can be treated as a vector, and Asa H can be run for a period of time while we record the utility gains.  A second set of parameters can be employed during a run in the same environment and the utility gain again recorded.  With these vectors and utilities we can use the Asa H extrapolation algorithm to improve the parameter settings.
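Experiment 1 (advance the time step only on significant change) might be sketched as follows; the tolerance value is an arbitrary placeholder:

```python
# Sketch of experiment 1: record an input component (and advance the
# time step) only when it changes significantly. Tolerance is arbitrary.

def record_significant(samples, tol=0.1):
    """Keep the first sample plus any sample differing from the last
    kept sample by more than tol."""
    kept = []
    for s in samples:
        if not kept or abs(s - kept[-1]) > tol:
            kept.append(s)
    return kept
```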

Asa H natural language understanding

I've studied the 1000 most commonly used words in English.  I believe I know how to teach Asa H 1/4 to 1/3 of the concepts involved in understanding these terms.  I am not sure how much will be required before Asa H can learn autonomously from the web or from human texts.  Would these requirements be relaxed if Asa could query humans when needed (for synonyms, linguistic examples, sensory examples, etc.)?  Such a query system would be easy to program into Asa. (Triggered, say, when the degree of match is too low.)
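Such a query trigger might be sketched as follows (the similarity function and threshold here are toys of my own, not Asa H's dot-product match):

```python
# Sketch: query a human whenever the best case match is too weak.
# The similarity function and threshold below are toys of my own.

def respond_or_query(inp, cases, similarity, threshold=0.75):
    score, best = max((similarity(inp, c), c) for c in cases)
    if score < threshold:
        return ("query", inp)   # ask a human for a synonym or example
    return ("use", best)

def sim(a, b):
    """Toy similarity: fraction of shared characters."""
    return len(set(a) & set(b)) / len(set(a) | set(b))

cases = ["red", "blue"]
```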

Saturday, February 16, 2013

Supplying knowledge to Asa H

1.  Some basic concepts should be hardwired in like:

touch, force senses
light/color sensors
temperature sensors
sound sensors
time sense, clock and calendar
hunger sense
etc.

2. Some should be hand coded in ("innate") like:

calls
seek recharger and plug in
disk copy/ replication
grasp and lift sequences (note that there is need for output knowledge too, like human "muscle memory")
etc.

3. Some concepts would be taught to early AIs (but then recorded for future generations of AIs) like:

soft and hard
drop
push
approach
collision
weighing
work
etc.

4. Much would subsequently be learned by the (now independently living) AI.  Again, these concepts could be recorded for future generations of AIs.

We can't just upload Ava (see my blog of 25 June 2012) onto the web and let it use web inputs as senses and in turn use the web to act on the world.  Such a "baby Ava" would flail about, probably destroy itself (as a lone human baby would) and probably harm us too.

Thursday, February 14, 2013

Teaching Asa H concepts/vocabulary

In chapter 1 of my book Twelve Papers (www.robert-w-jones.com, book) I describe how Asa H was taught words like

move
stop
fast
slow
obstacle/object
turn
collision
recharge

and sensory examples of each.

With Lego NXT sensors I believe I can teach Asa H words and concepts (meanings) like

touch
see
force
left
right
hear
say
approach
revolve
push
grasp
lift
drop
hunger
color
hard
soft
weigh
hot
cold
light
dark

I can also teach letter and numeral recognition, as well as recognition of some common objects in Asa H's environment.  But Lego NXT sensors may be too limited, too few, to give Asa the 800 or more examples I think I need in order to begin to deal with natural language. (I do have webcams as well, of course, and Lego has used these too.)

Or, can computer speed and a small number of sensors substitute for the large volume of sensory input humans get?

Wednesday, February 13, 2013

The need for multiple software development methodologies

There is not one single best programming methodology, one best set of practices.  Different software systems are deployed in very different environments with very different demands.

NASA spacecraft need the most bug-free software possible.  The methodology employed by the On-board Shuttle Group in Clear Lake, Texas, is well suited to developing such systems.

Banks, internet commerce, etc. need highly secure software systems.

The home computer market, games, entertainment, etc. require software that is economical throughout its entire lifecycle.

Experimental (exploratory) programming needs to be quick and easy to adapt and change.

Each of these environments calls for a different set of methodologies, tools, and practices.  We should not be looking for (or try to teach) one single best practice.

Tuesday, February 12, 2013

Reference materials

I judged a science fair last weekend.  There were far too many references to web resources and far too few references to books and journal articles. There was a perpetual motion experiment with references to the "Newman motor."

Monday, February 11, 2013

Open mindedness

You should study lines of research that are the opposite of your own.  They may contain data and ideas that you've missed.  In AI this means read Hubert Dreyfus, John Searle, Roger Penrose, Keith Chandler, etc.

Friday, February 8, 2013

Asa H is a universal Turing machine

By limiting time shifting to one time step only (to provide "state"), disabling extrapolation, and requiring the dot product match to be (nearly) perfect (within roundoff error), Asa H 2.0 can simulate a universal Turing machine of the sort in Minsky's book (Computation, Prentice Hall, 1967).
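As a sketch of the idea, a Turing machine's transition table can be treated as a case memory consulted only on exact matches (the toy machine below just flips bits until it reaches a blank; it is illustrative, not Asa H code):

```python
# Sketch: a Turing machine whose transition table is a case memory
# consulted only on exact (state, symbol) matches. This toy machine
# flips bits, moving right, and halts on the blank symbol "_".

rules = {  # (state, symbol) -> (write, head move, next state)
    ("s", "0"): ("1", +1, "s"),
    ("s", "1"): ("0", +1, "s"),
    ("s", "_"): ("_",  0, "halt"),
}

def run(tape):
    tape = list(tape) + ["_"]      # blank-terminated tape
    head, state = 0, "s"
    while state != "halt":
        write, move, state = rules[(state, tape[head])]  # exact match only
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")
```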

Tuesday, February 5, 2013

Data Logging for Asa H

In order to become smarter Asa H must know more.  I have used synchronized files from multiple data loggers (Measurement Computing model USB-503) to input hours' worth of observations to Asa H 2.0 via a USB port. (Almost 200,000 values can be logged.)
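Merging synchronized logger files into a single time-ordered input stream might look roughly like this (the (timestamp, value) format is illustrative, not the USB-503's actual file format):

```python
# Sketch: merging synchronized data-logger files into one time-ordered
# input stream. The (timestamp, value) format is illustrative.

def merge_logs(*logs):
    """Each log is a list of (timestamp, value) pairs; return one
    time-sorted stream of (timestamp, (value-per-logger)) tuples."""
    by_time = {}
    for i, log in enumerate(logs):
        for t, v in log:
            by_time.setdefault(t, [None] * len(logs))[i] = v
    return [(t, tuple(by_time[t])) for t in sorted(by_time)]
```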

Friday, February 1, 2013

Divided consciousness?

Is divided consciousness (see Divided Consciousness, E. R. Hilgard, Wiley, 1986) a result of highly parallelized computation?  Will it occur in AIs?

Atheists in TV fiction

It's good to see television heroes and heroines that are atheists:
Alicia Florrick, "The good wife"
Temperance Brennan, "Bones"
Gregory House, "House MD"

Monday, January 28, 2013

Is Siri an artificial intelligence?

We have Siri on an iPhone 5.  There has been debate on the web as to whether or not Siri constitutes AI.  Certainly Siri began life as a project in an AI lab.  My theory of thought (www.robert-w-jones.com, cognitive science, theory of thought and mind) decomposes thinking into a dozen or more functions like memory, feature detection, deduction, etc.  Of these Siri possesses perhaps half of my list.  So Siri is an AI program but doesn't yet get "all the way."  With Asa H I've tried to cover all of the required functions.  But Siri does a better job with natural language processing.  Siri is a specialist; humans and Asa H are more generalists.

Thursday, January 24, 2013

Speech recognition comes into its own

Running software like ViaVoice and doing the occasional error correction did not displace touch typing on a real keyboard.  But speech recognition on an iPhone 5 is better than hunting and pecking on a tiny virtual keyboard when you want to text someone.

Wednesday, January 23, 2013

Spreadsheet AI

Back in the 1980s people were trying to do everything with spreadsheets.  (A more recent book on using spreadsheets to do physics calculations is: Doing Physics with Spreadsheets, Aubrecht, et al, Prentice Hall, 2000.)
The May 2005 issue of AI Expert Newsletter (www.ainewsletter.com) was devoted to spreadsheets, but devoted mostly to neural networks and automata.  Many more AI methods have been implemented as spreadsheets:
case-based reasoners (Freedman, et al, page 296, Proceedings of the first inter. conf. on AI app. on wall street, IEEE, 1991)
clustering (Aravind, et al, Inter. J. of Comp. App., vol. 11, No. 7, Dec. 2010)
decision trees
search
pattern matching
Markov chains (Ching and Ng, Markov chains, Springer, 2006)
constraints
genetic algorithms
optimization

I've deployed a version of chained case-based reasoning (see chapter 3 of my book Twelve Papers, www.robert-w-jones.com , book) in Excel.
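A bare-bones sketch of chained case-based reasoning, where the result of one case lookup becomes the input of the next (the case data here is invented for illustration, not the matriculation study's actual cases):

```python
# Sketch: chained case-based reasoning, where each lookup's output
# becomes the next lookup's input. The case data is invented.

cases = {
    "gpa_high":     "likely_admit",
    "likely_admit": "likely_visit",
    "likely_visit": "likely_matriculate",
}

def chain(start, cases, max_steps=10):
    """Follow case lookups until no further case matches."""
    path = [start]
    for _ in range(max_steps):
        nxt = cases.get(path[-1])
        if nxt is None:
            break
        path.append(nxt)
    return path
```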

What innate knowledge should Asa H begin life with?

Asa H should not have to learn everything.  It would certainly be reasonable for it to have innate reflexes like:
pupillary light reflex (and self-scaling inputs in general)
blink reflex
withdrawal reflex (withdrawal from heat, "pain", etc.)
knowledge of how and where to recharge itself (if it's an embodied mobile robot)
a mobile robot should probably have a cliff sensor
etc.
If Chomsky is right, a part of natural language skill should be innate.
Other pre and post processing will be possible depending on the inputs and outputs used.
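A self-scaling input stage, analogous to the pupillary light reflex, might be sketched as a running normalization of raw readings (an illustrative toy, not Asa H code):

```python
# Sketch: a self-scaling input stage, analogous to the pupillary light
# reflex: raw readings are normalized into [0, 1] over the range seen.

class SelfScalingInput:
    def __init__(self):
        self.lo = float("inf")
        self.hi = float("-inf")

    def read(self, raw):
        self.lo = min(self.lo, raw)
        self.hi = max(self.hi, raw)
        if self.hi == self.lo:       # only one distinct value seen so far
            return 0.5
        return (raw - self.lo) / (self.hi - self.lo)
```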

Friday, January 18, 2013

Why Lego?

Why build robots with Lego NXT?  In student labs at ESU we have used Lego RCX, Lego NXT, Boe-bots, and Vernier sensors.  I also have several Roombas.  Robots are expensive.  Just the gripper for a typical industrial robot costs more than a new car.  I also wanted to be able to try out many different robot designs. Lego is not cheap but it was the most economical of the choices we had.

Advantages to numerical AI

"Numerical AI" and numerical representation has some advantages over symbolic AI in that it can draw from the mathematically well developed areas such as:
statistics
operations research
control theory
optimization theory
function approximation methods
neural network theory
constraint methods
extrapolation (forecasting, prediction)
interpolation
etc.
Numerical AI is likely to be at more of a disadvantage when attempting things like natural language.

Tuesday, January 15, 2013

Asa H can make better decisions than (some) humans

In chapter 3 of my book Twelve Papers (www.robert-w-jones.com , book) I experimented with a program that predicted the likelihood of student matriculation at a given university.  That particular study made use of chained case-based reasoners.  I have presented similar data to Asa H 2.0 and to a human.  Asa H is more accurate at predicting student matriculation. 

This work on computer decision making is original for Asa H but is anticipated by many studies with other computer models (see, for example, The New York Times, technology section, 18 July 2006, Maybe we should leave that up to the computer, Douglas Heingartner).

I should probably emphasize (SOME) decisions as well as (some) humans.

Friday, January 11, 2013

Science versus management

If you leave science for 5 or more years you probably will not be able to get back into it again. Most management posts these days are so demanding that you must give up your research in order to be a good manager.  After 5 or so years as a manager you are then stuck.  You can't go back any longer.  So make the decision carefully.

More blogger oddities

When you click on "older posts" some are skipped altogether.
The blog archive list is sometimes written over/onto the right hand side of the blog picture.
Blogger sometimes leaves a line of text almost blank, except for a few words which appear centered in the middle of that line.

Thursday, January 10, 2013

Cloud computing

I have run small BASIC programs on quiteBASIC (at www.quitebasic.com) and Applesoft BASIC in JavaScript (at calormen.com/applesoft), and C++ programs on codepad (at codepad.org), all free.  This is an easy way to do experimental programming (i.e., testing new algorithms, etc.) from any handy device that has internet access.

Monday, January 7, 2013

Lego NXT Waldo

I've been experimenting with a "waldo" arm/hand built from 6 - 8 Lego NXT interactive servos.  The waldo would be used, by hand, to input sign language signs (with the servo motors powered down), and output signs using the servo motors (powered up).  Various manipulation tasks could be taught, and recorded, as well as sign language.

Tuesday, January 1, 2013

Alternate realities around us

For those who find it hard to believe in alternate realities (see my blog of 31 Dec. 2010 and references therein) I would point to the members of the Republican Party in the United States.  They see a world very different from the one I see.  This even reaches into the hard sciences.  Earth sciences: they reject climate change.  Biology: they reject evolution.  Physics: they believe in a young earth and universe.

Circuit fault diagnosis

Our electronic circuit (and component) models (be they in the form of equivalent circuits, characteristic curves, etc.) are models of how correct circuits operate when assembled out of "on spec." component parts.  These models do not, in general, tell us much about what faulty components will do or how incorrectly wired circuits will behave.  Some simple fault models can be added to our circuit models (see, for example, Simply Logical, by Peter Flach, Wiley and Sons, 1994, pgs 163-166 and Digital Systems, 5th edition, R. J. Tocci, Prentice-Hall, 1991, pgs 130-139).  But we typically can't anticipate all possible fault mechanisms.
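A stuck-at fault model of the kind those references describe might be sketched for a single gate like this (illustrative only):

```python
# Sketch: a stuck-at fault model for a single AND gate and a diagnosis
# step that keeps the fault hypotheses consistent with an observation.

def and_gate(a, b, fault=None):
    """fault may be None (healthy), "stuck_at_0", or "stuck_at_1"."""
    if fault == "stuck_at_0":
        return 0
    if fault == "stuck_at_1":
        return 1
    return a & b

def diagnose(observed, a, b):
    """Return the fault hypotheses consistent with the observed output."""
    hypotheses = [None, "stuck_at_0", "stuck_at_1"]
    return [f for f in hypotheses if and_gate(a, b, f) == observed]
```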

Dualism

I do not agree with mind-body dualism.  Today the consensus view is that thought and mind are a combination of processes like memory, generalization, deduction, organization, induction, classification, feature detection, analogy, etc., performed by computational machinery.  But I believe that quantum mechanics is a plausible dualist theory of reality (R. Jones, Bull. Am. Phys. Soc., vol. 56, No. 1, March 2011).  Alternatively, if other spaces exist (see my 1 Nov. 2012 blog), dualism might exist as forces (like gravity?) acting ("leaking") between two (or more) different spaces (universes).

Emergent phenomena again

"...we now know that the most interesting properties of cells are 'emergent' properties, resulting from elaborate networks of interactions between many different molecules..."  Bruce Alberts, Science, vol. 337, 28 Sept. 2012, pg 1583. (see my paper in the Kansas Science Teacher, vol. 10, pg 11, 1994)

Science Fairs or STEM Fairs

Perhaps we need "STEM Fairs."  When looking over "science" projects I look for measurements (and theories, and hypotheses, etc.).  But engineering projects are more about building things (things which perform particular tasks). Similarly "computer science" projects may be programs that perform various useful functions.  The judging criteria need to be expanded if "science" fairs are to include engineering, math, and technology as well as science.

Scientific pluralism and consistent quantum theory

P. C. Hohenberg says (Rev. Mod. Phys., vol. 82, pg 2835, 2010) that "consistent quantum theory...is...based on...the simultaneous existence of multiple incompatible representations of reality...no single framework suffices to fully characterize a quantum system." "...there is not a unique exhaustive description of a physical system or process."  I advocate the same description for non quantum systems, "scientific pluralism." (see R. Jones, Bull. Am. Phys. Soc., vol. 54, #1, March 17, 2009 and my blogs of Sept. 8, 2011 and Sept. 26, 2010)

Hohenberg goes on to state a "single-framework rule" that "Any prediction of the theory must be confined to a single framework and combining elements from different frameworks leads to quantum mechanically meaningless statements."  I would not say that of scientific pluralism in general since we might well sum over a whole set of models in order to obtain a prediction. (see the Bayesian argument in my blog of   Aug. 17, 2012) But perhaps Hohenberg is saying much the same thing when he says that "different frameworks capture different real aspects of a quantum system and a full description of that system requires the set of all frameworks."

A nervous system for mobile robots (embodied AI)


 
The NXT platform has a rather limited number of inputs and outputs, but we have extended these somewhat by using multiple NXT bricks as well as input and output multiplexers (not shown).
The NXT hardware can be assembled and modified in much the same way that we assemble and modify system software.  A good reference is From Bricks to Brains, M. Dawson, et al, AU Press, 2010 (also available free electronically online).  My software is, of course, built around Asa H 2.0.

A new quantum theory?

After hearing John Ralston's talk at the Nov. 8-10, 2012 APS meeting, "Quantum mechanics without Planck's constant" I read his paper of the same name, arXiv:1203.5557. After hearing the talk my first question was, Is this correct?  My second question was, Is this just a change of variables, a kind of principal component analysis? (dimensional analysis?)
Even if no new physics comes from Ralston's reworking of quantum theory there might be important changes to our ontology (see my Feb. 26, 2012 blog).

Freedom to lie

Should freedom of speech really allow you to lie (regularly and systematically) on (right wing) cable "news"?  Isn't there a problem with this?  How can we fix it? How can we get them off the air?

"You are entitled to your own opinion, but you are not entitled to your own facts."  Daniel Patrick Moynihan