Wednesday, November 13, 2013

Artificial intelligence exists today

There are many competing definitions of intelligence:

"The ability to use memory,...experience,...reasoning,...in order to solve problems and adapt to new situations."
"The ability to learn,...and make judgments...based on reason."
"Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems,...learn quickly, and learn from experience."
"The ability to learn facts and skills and apply them."
"...ability to adapt effectively to the environment, either by making a change in oneself or by changing the environment or finding a new one...a combination of many mental processes..."
"...the general mental ability involved in calculating, reasoning, perceiving relationships and analogies, learning quickly, storing and retrieving information,...classifying, generalizing, and adjusting to new situations."
"Sensation, perception, association, memory,...discrimination,...and reasoning."
"...the process of acquiring, storing in memory, retrieving, combining, comparing..."
"...a cluster of cognitive abilities..."
"Any system...that generates adaptive behaviour..."
"...getting better over time."
"...effectively perceiving,...and responding to the environment."
"The ability to be able to correctly see similarities and differences and recognize things that are identical."

My Asa H 2.0 (and some other AI experiments) meets all of these criteria and is intelligent.  The question, then, is how intelligent it is.

Humans and AIs don't occupy the same niche.  They don't eat the same foods, reproduce in the same way, occupy the same habitat, etc.  So asking which is "more intelligent" or "superior" will give an approximation at best.  Different people have different levels of intelligence, and there are different sorts of intelligence as well, so we should be quite happy to have an AI even if it is not as smart as the very smartest human; it can still be useful.  But for those people who seem to be satisfied only with "human equivalent" AIs, I can't help but note that:

Asa H:
         
performs control tasks better than humans
has more short-term memory than humans
has more reliable memory than humans
can have more senses than humans
can multiply itself by disk copying
can (via telepresence) be in many places at once

AIs have been able to:

handle statistics and probability better than humans can
operate with more consistency than humans
remotely maneuver helicopters better than human pilots can
create patentable inventions (e.g., Koza's genetic algorithms)
prove math theorems humans had not been able to prove (e.g., the Robbins problem)
evaluate loan applications and predict student success better than humans can
plan/schedule transportation problems faster than humans can
solve arithmetic/accounting problems faster and more accurately than humans can

etc.

On the other hand:

Humans currently have a richer set of emotions than AIs do.
Humans are better at natural language than AIs are (though an AI is better at Jeopardy! than humans are).

Monday, November 11, 2013

Giving Asa H more than 5 senses

While humans have only 5 senses, AIs can have more.  With sensors from Lego, HiTechnic, Vernier, Mindsensors, and Measurement Computing I have been able to give Asa H (embodied in a Lego NXT robot) sensitivity to light, sound, acceleration, touch (force), color, magnetic field, temperature, voltage, and current.
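To make that concrete, here is a minimal Python sketch of one way such readings could be collected into a single observation vector for Asa H.  The sensor names, ranges, and stub readers below are made up for illustration; they are not the actual NXT driver calls.

# A minimal sketch (not the actual NXT driver code): each sense is a
# callable returning a raw reading, plus the range used to normalize it.
# The stub lambdas just return fixed values for illustration.

SENSORS = {
    "light":        (lambda: 512,  0, 1023),
    "sound":        (lambda: 300,  0, 1023),
    "acceleration": (lambda: 0.1, -2.0, 2.0),
    "force":        (lambda: 4.2,  0, 50.0),
    "temperature":  (lambda: 21.5, -20.0, 120.0),
    "voltage":      (lambda: 7.9,  0, 10.0),
}

def sense():
    """Return one time step's normalized observation vector for Asa H."""
    vector = []
    for name, (read, lo, hi) in SENSORS.items():
        raw = read()
        vector.append((raw - lo) / (hi - lo))  # scale every sense into [0, 1]
    return vector

print(sense())

Normalizing everything into one range is the point of the sketch: once each sense is just another component of the observation vector, adding a tenth or twentieth sense is only another dictionary entry.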

Saturday, November 9, 2013

Intelligence and consciousness

When I talk about my AI work I am regularly asked about consciousness.  I usually reply that much of my best work is done by my subconscious.  But an important question still remains: Will an intelligence necessarily exhibit consciousness (at least part of the time)?  In some models/theories of intelligence and consciousness the answer is yes.

In order to handle the partial observability of nature an intelligence will require internal state, and feedback loops to maintain/update it (Artificial Intelligence: A Modern Approach, Russell and Norvig, 3rd edition, page 51).  Some intelligent activities require loops/feedback; some do not (see Knowledge Engineering and Management, G. Schreiber et al., MIT Press, 2000, page 125 for a task list).
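As a toy illustration of that point (my own sketch, not anything from Russell and Norvig), an agent in a partially observable world has to carry an internal state estimate forward and fold each new percept, and its own last action, into it:

# Toy sketch of a state-maintaining agent (illustrative only).
# Each percept shows only part of the world, so the agent keeps
# and updates an internal estimate rather than reacting to raw input.

def update_state(state, percept, action):
    """Fold the latest percept and our own last action into the estimate."""
    state = dict(state)
    state.update(percept)          # overwrite whatever we can now observe
    state["last_action"] = action  # remember what we just did (feedback)
    return state

def choose_action(state):
    """Trivial policy: act on the remembered estimate, not the raw percept."""
    return "seek_light" if state.get("light", 0.0) < 0.5 else "wander"

state, action = {}, None
for percept in [{"light": 0.2}, {"sound": 0.7}, {"light": 0.9}]:
    state = update_state(state, percept, action)
    action = choose_action(state)
    print(state, "->", action)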

One model holds that feedback is the key to consciousness (see my blog of 29 June 2011).  As in Elman and Jordan networks, you can see your own actions, and some of your thoughts/internal signals/internal state, as feedback from the hidden layers.  Within these models intelligence does lead directly to consciousness.
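Here is a minimal numpy sketch of the Elman idea (illustrative only, not Asa H's actual code): the previous hidden activations are fed back in alongside the next external input, so the network can "see" part of its own internal state.

import numpy as np

# Minimal Elman-style recurrent step.  The hidden activations from
# step t-1 are presented again at step t through the context weights,
# so part of the network's own internal state is visible to it --
# the feedback loop discussed above.

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 5, 2

W_in  = rng.normal(size=(n_hidden, n_in))      # input -> hidden
W_ctx = rng.normal(size=(n_hidden, n_hidden))  # context (old hidden) -> hidden
W_out = rng.normal(size=(n_out, n_hidden))     # hidden -> output

context = np.zeros(n_hidden)  # memory of the previous hidden state

for t, x in enumerate(rng.normal(size=(4, n_in))):  # four time steps
    hidden = np.tanh(W_in @ x + W_ctx @ context)    # sees its own past state
    output = W_out @ hidden
    context = hidden                                # feed hidden state back
    print(f"t={t}: output={output}")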

Friday, November 1, 2013

Glue, wrappers, etc.

I much prefer starting with a working application and making small changes, one at a time, testing as I go.  Asa H has evolved in that way.  I recommend the methodology to students.

But AI is a vast field.  Most workers will only develop a single component exhibiting a single functionality.  Assembling a complete AI will then involve gluing these components together.  Over the years I have used a lot of code written by other people.  I don't want to reinvent the wheel.

I find that creating wrappers and software glue is difficult and error-prone.  Keeping variable names straight is hard even when I'm dealing only with components that I wrote myself.  Even if you use Hungarian notation or some other naming standard, the next guy doesn't.  Components may have been designed and written on different hardware platforms, with different operating systems, different windowing systems, different compilers, different libraries, etc.  Testing the glue software is also tricky.  I've not seen much published work on these issues and methodologies.
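To illustrate the kind of glue I mean, here is a hypothetical Python adapter (all names invented for illustration, not any specific component of mine) that renames and rescales one component's output record into the form a second component expects.  The mapping table is exactly where the variable-name mistakes creep in.

# Hypothetical glue/adapter sketch: component A emits readings under its
# own names and units; component B expects different names and ranges.

# What component A produces (Hungarian-ish names, raw ADC counts):
a_output = {"nLightRaw": 512, "nSoundRaw": 300}

# What component B expects (plain names, values in [0, 1]):
FIELD_MAP = {
    "nLightRaw": ("light", 1023.0),  # B's name, full-scale divisor
    "nSoundRaw": ("sound", 1023.0),
}

def adapt(record):
    """Translate one component's output record into the next one's input."""
    adapted = {}
    for src_name, value in record.items():
        if src_name not in FIELD_MAP:
            raise KeyError(f"unmapped field: {src_name}")  # catch glue bugs early
        dst_name, scale = FIELD_MAP[src_name]
        adapted[dst_name] = value / scale
    return adapted

print(adapt(a_output))  # {'light': 0.5004..., 'sound': 0.2932...}

Raising an error on any unmapped field, rather than silently passing it through, is one small way to make this sort of glue testable.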