Thursday, December 8, 2016
Specialists in academics
It's faculty recognition time again. With the industrialization of education, our command hierarchy (operating on the Führer principle) tells us that we all must perform teaching, research, and service, each in specific assigned amounts. I would think that specialization might actually be more efficient than this. One person might perform only teaching, another only research, and another only service. We already have those who do only administration. Some people might perform two or more of these basic functions, but there is no need for everyone to do three or all four. "From each according to his ability."
Friday, December 2, 2016
Specialist agents in Asa H
We would like to slowly grow a society of A.s.a. H. agents. Each agent would be a specialist and might be developed relatively independently and then added to the society in order to expand its overall expertise. To what degree is this possible? On the other hand what capabilities must ALL agents share? Some agents might assist or care for others.
Thursday, November 17, 2016
Beta > 1 plasmas
I have suggested that fusion researchers should put more effort into the study of beta > 1 or wall confined plasmas. Magneto-Inertial Fusion and the Magnetized Target Fusion projects at Los Alamos National Laboratory are recent examples of this. Unfortunately, theoretical studies of such systems may be employing overly optimistic models of the magnetic thermal insulation. One might well expect such systems to have stochastic field lines. If that is the case then we might want to employ turbulent thermal insulation as suggested in my papers: Current Science, pg 991, 1988 and Bull. Am. Phys. Soc., Nov. 4, 2009 (51st Annual Meeting of the APS Division of Plasma Physics).
Wednesday, November 16, 2016
Whiskers
A Spectra Symbol Flex Sensor (part FS-L-0112-103-ST) makes a good whisker for A.s.a. H. The whiskers can be stiffened and strengthened by inserting the sensor into a long thin spring.
Saturday, November 12, 2016
Moral calculus and values
In blogs like those of 21 Sept. 2010 and 26 Oct. 2016 I have discussed value/moral systems for A.s.a. H. and other artificial intelligences. Jonathan Haidt's moral foundations theory (see his book The Righteous Mind) suggests that humans have a set of primary values:
1. Care versus harm
2. Liberty versus slavery
3. Fair versus cheat
4. Loyal versus betrayal
5. Authority versus submissive
6. Sanctity versus degrading
Haidt believes political liberals value the first 3 more while political conservatives value the last 3 more, thus explaining how some people can vote for someone like Trump. They experience somewhat different realities. (See my blog of 21 July 2016) This set of values overlaps only slightly with those found in my 21 Sept. 2010 blog. Perhaps I should try to implement more of them? Or should we be less concerned with what humans value (sticking to scientism)?
Thursday, November 10, 2016
Sense of smell
To give A.s.a. H. more of a sense of smell I have bought a set of MQ gas detector modules (MQ-2, 3, 4, 5, 6, 7, 8, 9, and 135) to add to A.s.a.'s conventional smoke detector.
Wednesday, November 9, 2016
In a mirror, darkly
There were several Star Trek episodes in which members of the Enterprise crew are trapped in an evil mirror universe. For some time now I have felt that I was stuck as a slave in that Terran empire. Today I can't keep this feeling private any longer.
The political party that gave us a crook and a moron has now given us a mad man. And the people that elected a crook and a moron have now elected a mad man. Only a fool gives a mad man control of nuclear weapons. Human values aren't what they should be.
All species become extinct. Is it now our turn? Just how many mad men can the world survive? "The end of days" could be a self-fulfilling prophecy. Are we about to see, first hand, the solution to the Fermi paradox?
I had hoped that humankind would survive long enough to establish a robust independent society of mechanical intelligences on earth. I now think that that is less likely. Perhaps it will occur elsewhere or in some other Everett world.
The United States presents a substantial threat to the world as a whole and its survival.
Tuesday, November 8, 2016
Voting in Kansas 2016
At my polling place, gone are the "no guns" signs. In their place were multiple "no cellphone" signs. I stood there wondering if I could do concealed carry of a new sort. At least my papers were in order: photo ID, proof of voter registration card.
Friday, November 4, 2016
Surprise
Some people believe that an AI needs to experience emotions. Surprise could be implemented in Asa H as a low similarity measure seen at time t after having n time steps during which the similarity measure had been high. Surprise might then trigger attention, bringing more sensors to bear, increasing the depth of case base search, reducing time spent on extrapolation and other learning, simulation, and planning, etc.
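A minimal sketch of this idea follows (not Asa H's actual code; the thresholds and the run length n are just illustrative assumptions):

```python
# Sketch: surprise as a sudden drop in the case-match similarity measure.
# HIGH, LOW, and N are illustrative assumptions, not values taken from Asa H.

HIGH, LOW, N = 0.8, 0.3, 5   # "high" similarity, "low" similarity, run length n

def surprised(history, current):
    """True if the last N similarity measures were high and the new one is low."""
    recent = history[-N:]
    return len(recent) == N and all(s >= HIGH for s in recent) and current <= LOW

history = [0.9, 0.85, 0.92, 0.88, 0.95]    # n time steps of good matches
print(surprised(history, 0.2))             # True -> trigger attention: add sensors,
                                           # deepen case-base search, cut back on
                                           # extrapolation, simulation, planning
```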
Thursday, November 3, 2016
Multiple exemplars for concept formation
Embodiment in general, and embodiment in Lego robots in particular, gives Asa only one, or perhaps a few, exemplars of any given concept. But an intelligence will want multiple exemplars from which to generalize, enrich, and flesh out detail. The work presented in my blog of 1 Oct 2015 may not be enough.
Tuesday, November 1, 2016
Innate concepts again
Which concepts, if any, should be "hand coded" into Asa H's concept hierarchy, i.e., serve as "core concepts"? Susan Carey's work might suggest object, agent/agency, number/magnitude, cause, and language as possibilities worth considering. (The Origin of Concepts, OUP, 2009)
a simple Quinian bootstrapping in Asa H
In A.s.a. H. a concept's name is associated with whatever sequence of features happens to have been active at the moment the name is learned, for example: collision=(sense near, contact, decelerate, hear word "collision"). If this same name is heard again, but under somewhat different conditions, then the concept may evolve/change a bit. (See The Origin of Concepts, Susan Carey, OUP, 2009 for a discussion of a much more elaborated Quinian bootstrapping.)
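Something like the following toy sketch captures the flavor (the feature names and the simple blending update are my illustrative assumptions, not Asa H's actual code):

```python
# Toy sketch of the naming/bootstrapping idea above. The feature names and the
# blending rate are illustrative assumptions only.

concepts = {}   # name -> feature-activation vector

def hear_name(name, active_features, rate=0.3):
    """Bind a name to the currently active features; nudge it on later hearings."""
    if name not in concepts:
        concepts[name] = dict(active_features)      # first exposure: just copy
    else:
        c = concepts[name]                          # later exposure: blend a bit
        for f in set(c) | set(active_features):
            c[f] = (1 - rate) * c.get(f, 0.0) + rate * active_features.get(f, 0.0)

hear_name("collision", {"sense near": 1.0, "contact": 1.0, "decelerate": 1.0})
hear_name("collision", {"sense near": 1.0, "decelerate": 0.6, "hear crash": 1.0})
print(concepts["collision"])    # the concept has evolved/changed a bit
```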
Saturday, October 29, 2016
Dynamic adaptive case libraries
It would certainly be fair to call Asa H's hierarchical memory a dynamic adaptive case library (DACL). In his new book A Dynamic Adaptive Framework for Case-Based Reasoning (LAP Lambert, 2016) Orduna-Cabrera outlines some original ideas on DACLs including indexing structures and methods. I may try to adapt some of these for use in Asa.
Thursday, October 27, 2016
Another argument for alternate realities
In his book The Philosophy of Information (OUP, 2011) Luciano Floridi argues that "...reality is the totality of information..." possessed by a "semantic engine", i.e. a human or an AI. (page xiii) Since different people and different AIs will possess somewhat different information/knowledge bases they will experience somewhat different realities.
A.s.a. H. has learned to lie
My artificial intelligence Asa H has learned how to lie. Some of the first words I taught Asa were things like animal warning calls, calls that cause other agents to flee and/or hide or calls for other agents to come and help. (Help push an obstacle out of the way for example.) Asa learns that these calls (calls to other Asa agents in a society of agents or calls to a human nearby) result in the other agent(s) fleeing or approaching. Asa comes to learn that if it wants to "hog" a "food" source (battery recharging station) or other limited resource it can issue a warning call, a warning of danger that doesn't actually exist.
Wednesday, October 26, 2016
A moral machine
A.s.a. learns to predict and optimize/improve lifespan and diskcopy/reproduction at the top of the case memory hierarchy. Pains (or NOT(pain)s), battery charge, and component malfunctions (things like motor stalls and subsystem failures) are seen/measured as inputs at the bottom of the memory hierarchy. Pattern size/length and frequency of occurrence are treated as values on any and all levels of the memory hierarchy. For A.s.a., improvement, as measured by increases in these values, constitutes its moral calculus. We are not intending A.s.a. to be a servant.
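As a rough illustration only (the component names are my assumptions, not Asa's actual value code), such a vector value might look like:

```python
# Rough illustration of a vector value for a case. The component names and the
# normalization are assumptions for the sake of the example, not Asa H's code.

def case_value(pattern_length, frequency, pain, battery_charge):
    # Larger is better on every component; the components stay separate
    # (a vector) rather than being collapsed into one scalar score.
    return {"length": pattern_length,
            "frequency": frequency,
            "not_pain": 1.0 - pain,
            "charge": battery_charge}

print(case_value(pattern_length=6, frequency=12, pain=0.1, battery_charge=0.8))
```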
Guns on campus
On 18 October 2016 the Emporia State University faculty senate passed my resolution FSR 16003:
Whereas, firearms make it easier to kill people, and
Whereas, we wish to make it harder to kill people, and
Whereas, there has been too much gun violence in our society already
Therefore, be it resolved that all firearms should be prohibited on the campus of Emporia State University.
Monday, October 24, 2016
Rigged elections
The GOP should know all about rigging elections. They stopped the vote counting in Florida in 2000, the GOP-controlled Supreme Court picked the president in 2000 (though Gore won the popular vote), they have gerrymandered control of the House, and they suppress voting wherever and whenever they can.
Wednesday, October 19, 2016
Isn't Spacex trying to do too many things at once?
ISS resupply, F-9 reuse, Falcon heavy, manned Dragon, ITS.
It seems to me that if you try to do too many things at once either they all come in late or some of them fail or both.
This is an update of my 30 September 2011 blog.
Friday, October 14, 2016
Attention as response
One approach to handling/modeling attention, or at least one kind of attention, is to treat it as a response, either an innate response or a learned one. Things like turning toward a stimulus, increasing the gain on a microphone, adjusting vision magnification, turning on and bringing to bear additional sensors, etc.
Thursday, October 13, 2016
How human-like should a robot be?
There are those who argue that if a robot is made as human-like as possible this will help the two relate to each other and understand each other. See How to Build an Android by David Dufty, 2012 and Virtually Human by Martine Rothblatt, 2014. I tend to disagree. I think this just makes the robots creepy and harder to relate to. But it is true that body form and function influence the concepts the robot will develop and use. That will aid in robot-human understanding. I think something along the lines of SoftBank Robotics' Pepper is a reasonable compromise.
Wednesday, October 12, 2016
immortality
I have argued that immortality is impossible. (see my blog of 15 October 2010) I had expected, however, that there was room for a considerable increase in human lifespan. But Michael Ramscar of the University of Tübingen says that even at our current ages "Some things that might seem frustrating as we grow older are a function of the amount of stuff we have to sift through...and are not necessarily a sign of a failing mind. A lot of what is currently called decline is simply learning." (see The Myth of Cognitive Decline in Topics in Cognitive Science, 6, 2014, 5-42) Or, as Christian and Griffiths put it, "what we call cognitive decline may not be about the search process slowing or deteriorating but at least partly an unavoidable consequence of the amount of information we have to navigate getting bigger and bigger." (Algorithms to Live By, Holt and Co., 2016, page 103) I am not arguing that there are not things like Alzheimer's (my mother died with it). What I am arguing is that it may not be possible to have the kind of immortality some people hope for.
Monday, October 10, 2016
AAPT conference
At the American Association of Physics Teachers conference this past weekend, James Laverty of Kansas State University presented the 3D-LAP scheme (3-dimensional learning assessment protocol) for assessing the value/importance of various physics test questions. I noted favorably that the method employs a 3-dimensional vector value:
1. Scientific and engineering practice
2. Cross cutting concepts
3. Disciplinary core ideas
I am not sure these 3 are exactly what I would have come up with but I am obviously in favor of vector value systems in general.
I also was interested in the scientific and engineering practices they identify (from A Framework for K-12 Science Education):
1. Asking questions and defining problems
2. Developing and using models
3. Planning and carrying out investigations
4. Analyzing and interpreting data
5. Using mathematical and computational thinking
6. Constructing explanations and designing solutions
7. Engaging in argument from evidence
8. Obtaining, evaluating, and communicating information
Since I believe that the process of science is simply the process of intelligent thought (perhaps refined and augmented in various ways) these are then all things that my artificial intelligence A.s.a. H. should be doing too. Said another way, Asa should be able to do science.
1. Asa defines and identifies cases that lead to low utility, i.e., problems.
2. Asa's hierarchical memory creates, stores, and uses spatiotemporal patterns, models of reality.
3. Asa examines the accuracy of its extrapolations experimentally and plans future behaviors.
4. Asa examines its case memory using interpolation, extrapolation, value assessment, etc.
5. Asa is computational and uses mathematical as well as logical reasoning methods.
6. Asa designs improved behaviors to cope with problems, i.e., low utility situations.
7. Asa reasons ("argues") from evidence.
8. Asa can communicate and output its case memory.
I would like to improve upon Asa's present ability to ask and answer questions.
Friday, October 7, 2016
Attention in AI
I remain dissatisfied with our ability to focus attention. I believe that this will become more and more of an issue as we scale up applications of AI. One idea that might be useful is the concept of thinking about something in the right way. Perhaps specialist AIs can be built around groups/clusters of specialized concepts, knowledge, and algorithms. What would the right clusters be? How would we modify/learn them over time, perhaps dependent upon environment?
Thursday, October 6, 2016
The origin of consciousness
Although A.s.a. H. was not biologically inspired I do believe that consciousness in animals may have developed in the same sort of way that a sense of self develops in A.s.a. (As described in my recent blog posts, especially 1 October and 5 November 2015 and 21 July 2016.)
Some people have questioned why nature would evolve consciousness. In Asa H the usefulness and survival value of consciousness is directly observable.
Tuesday, October 4, 2016
Complex levels of reality
In my blogs of 13 April 2015 and 12 April and 18 June 2016 I have argued that there is not one single fundamental level of reality. At least not as described in our current best models. Rather, there are multiple levels. Things that are true at one level of description may not be true at another. In the macroscopic world things may be wet or dry; I can measure this property. Asa H measures the humidity of the surrounding air for example. But things in the microworld are not wet or dry. It makes no sense to try to measure if an electron is wet or dry for example. That property doesn't exist at that scale. It's not a relevant concept there. Spatiotemporal patterns that are found on one level of description may, or may not, be found on other levels. Reality is then described by the concepts, patterns, and laws taken collectively from all levels. (See, also, The Philosophy of Niels Bohr, H. J. Folse, North Holland, 1985. On page 166, for example, Folse describes how the concept of temperature would have been valid all the way down if classical laws had remained true at all scale sizes.)
Saturday, October 1, 2016
GPU computing
The Asa H project has always involved a certain amount of work on parallel processing. Each of the various levels in the case memory hierarchy could be running on a different computer, for example. (e.g., my blog of 14 December 2015) I am now looking at CUDA C/C++ to see if I can incorporate parallel processing with GPUs. (using an NVIDIA GeForce card) GPU computing has been used for machine learning, pattern matching, feature detection, and speech recognition, for example.
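The sort of step worth offloading to a GPU is the data-parallel matching of an input vector against the whole case library. A NumPy sketch of that inner loop is below (the shapes, sizes, and dot-product similarity are illustrative assumptions); it is exactly this kind of matrix-vector work that a CUDA C/C++ kernel would parallelize.

```python
# Sketch of the data-parallel step a GPU would accelerate: matching one input
# vector against every stored case at once. Shapes, sizes, and the normalized
# dot-product similarity are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
cases = rng.random((100_000, 64))                        # case library, one row per case
cases /= np.linalg.norm(cases, axis=1, keepdims=True)    # normalize each case vector

x = rng.random(64)
x /= np.linalg.norm(x)                                   # normalized input vector

similarities = cases @ x                  # one large matrix-vector product
best = int(np.argmax(similarities))       # index of the best-matching case
print(best, similarities[best])
```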
Thursday, September 29, 2016
Different degrees of self-awareness
Lewis et al define different levels of awareness for self-aware computing systems (Self-aware Computing Systems, Springer, 2016, pages 84-85 and 140-141):
stimulus-awareness
interaction-awareness
time-awareness
goal-awareness
meta-self-awareness
stimulus-awareness: A LEGO robot embodied, solar battery powered Asa H system might measure light intensity and be able to adapt to static environmental conditions. i.e., go sit under a floor lamp.
interaction-awareness: The robot has recorded that by turning toward the light source the light intensity and battery charging increase.
time-awareness: The robot may learn the hours of the day during which light streams in from a window.
goal-awareness: Extrapolation learning attempts to improve Asa's knowledge base and keep the system's batteries charged.
meta-self-awareness: Asa can adjust the proportion of time spent on various of its activities such as exploring, extrapolating, etc. (see www.robert-w-jones.com, book , chapter 1, section on self monitoring)
Wednesday, September 28, 2016
Self-aware computing again
Agarwal, et al argue (MIT Tech. Report AFRL-RI-RS-TR-2009-161) that a self-aware computer will have five major properties. It will:
1. Be introspective. Be able to observe and improve its own behavior.
2. Be adaptive. Be able to adapt to changing situations.
3. Be self-healing. Be able to make corrections if and when faults develop in itself.
4. Be goal-oriented. Attempt to achieve or improve certain specified conditions.
5. Approximate. Perform its functions to within some degree of accuracy.
Asa H has all of these properties.
Rationality
I am trying to engineer something that is more rational than humans are. Stanovich, et al attempt to define rationality and distinguish it from intelligence in their new book The Rationality Quotient (MIT Press, 2016). I am interested in doing the same thing but I believe that rationality and intelligence need to be described by vectors rather than scalar values.
It's true that attempts to measure intelligence (as IQ) fail to include some of the factors that Stanovich, et al list. But another part of the distinction between rationality and intelligence comes about due to the attempt to measure each as a scalar quantity. It seems to me to be possible to think in terms of a single VECTOR "rationality/intelligence."
Monday, September 26, 2016
Words that trigger action
A small number of the vocabulary words that Asa has learned (see my blog of 5 November 2015) should trigger action (see chapter 1 of my book Twelve Papers, www.robert-w-jones.com, section on learning protolanguage). Words like stop, turn, fast, slow, leave, move, lift, drop, kick, and carry. Asa has been learning a few more like look, walk, run, and jump. But how do we tell Asa when we want it to act and when we don't? With humans, how loud the command is may be the deciding factor. If written, then an exclamation point might be used as the trigger. These could be implemented in Asa, but should they be?
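If we did implement it, the check might look something like this sketch (the word list, the loudness threshold, and the exclamation-mark rule are just assumptions):

```python
# Sketch of an action trigger: act on a command word only when it arrives
# loudly or with an exclamation mark. The word list and threshold are assumptions.

ACTION_WORDS = {"stop", "turn", "fast", "slow", "leave", "move",
                "lift", "drop", "kick", "carry", "look", "walk", "run", "jump"}
LOUDNESS_THRESHOLD = 0.7

def should_act(word, loudness=0.0, written=""):
    is_action = word.lower() in ACTION_WORDS
    emphasized = loudness >= LOUDNESS_THRESHOLD or written.endswith("!")
    return is_action and emphasized

print(should_act("stop", loudness=0.9))      # True: loud spoken command
print(should_act("stop", written="stop"))    # False: no emphasis, so no action
```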
Friday, September 23, 2016
Dick, Jane, and Baby Sally
I have presented Asa H robots with progressively more complex activities/experiences in order to grow its hierarchy of mental concepts. (See my blogs of 18 July 2014 and 5 November 2015.) I have also given names to some of Asa's concepts. (See my blog of 11 June 2016.) As I teach Asa to talk and read I again need a curriculum. I need to start with something like a child's early reader, Dick, Jane, and Baby Sally. Should learning to read be conducted concurrent with the learning of the physical concepts, actions, etc.?
Thursday, September 22, 2016
Reconceptualizing reality and the sense of self
Humans occupy a single contiguous volume. Asa H may control a distributed system of robots that are not contiguous. Asa may then develop a sense of self that differs from what we humans experience. Will Asa find it easier to understand quantum entanglement for instance?
Multi-microcontroller architecture
I have assembled a multi-microcontroller architecture (H. W. Lee, MSc thesis, Cornell University, May 2008) operating over the internet using a client/server network. Each client or server program is running in RobotBASIC. (Explained in the book Hardware Interfacing with RobotBASIC, Blankenship and Mishal, 2011, on pages 83-84) The software runs a bit slower than I'd like but the robotic hardware is what dominates overall speed of operation.
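For readers without the RobotBASIC book, the pattern is simply the ordinary client/server exchange of short messages between nodes; a minimal Python sketch (the port number and message format are my assumptions) would be:

```python
# Minimal sketch of the client/server pattern described above, written in
# Python rather than RobotBASIC. The port and message format are assumptions;
# in practice each function would run on a different node of the network.

import socket

HOST, PORT = "127.0.0.1", 5050

def serve_once():
    """Server node: accept one short command and acknowledge it."""
    with socket.socket() as s:
        s.bind((HOST, PORT))
        s.listen(1)
        conn, _ = s.accept()
        with conn:
            cmd = conn.recv(1024).decode()
            conn.sendall(("ack:" + cmd).encode())

def send(cmd):
    """Client node: send a short command string and return the reply."""
    with socket.socket() as c:
        c.connect((HOST, PORT))
        c.sendall(cmd.encode())
        return c.recv(1024).decode()
```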
Wednesday, September 21, 2016
Specialization
We say that we want to build "artificial general intelligences" or "universal artificial intelligences." But in the modern world humans are specialists. No one human being could be an expert in all of physics, or all of mathematics, or all of biology. How important is individual "talent?" Can I just train different copies of Asa H on different sets of knowledge and experiences or must some of the algorithms Asa uses be specialized too? Do we need to develop one AI or many? (Like Gardner's multiple intelligences?)
Tuesday, September 20, 2016
Virtual embodiment
Embodiment is not the silver bullet some people would have us believe. It is, however, the easy way to define a number of important concepts. (See my blog of 1 October 2015 for examples.) It is still true, however, that training an AI in a simulator is faster than training in the real world. The biggest problem with simulators is giving them enough channels of sensory input for the AI to have a realistic experience. With Asa H I am trying to use simulators to present less complex sensations and robots to provide others.
Electric Imp
I have bought an imp001 development kit (in addition to the adafruit and arduino I already had). I have commented previously that the internet of things may be a good way to give an AI the large number of sensory inputs it needs in order to understand the world.
Thursday, September 15, 2016
Nothing
Can "nothing" be defined solely in terms of the absence of properties? E.g., NOT(having mass), ..., NOT(having length), NOT(having width), ... , even, NOT(having duration)? But NOT(some property) seems, itself, to be a property. Certainly Asa H handles NOT(category X) in the same way it handles some (category X). And Boolean logic circuits handle NOT(X) the same way they handle X. If NOT a property IS a property too and if any "something" is just defined by its list of properties then "nothing" is a "something" too. (See my blog of 20 Feb. 2015)
For any of the concepts that Asa H has learned (see, for example, my blogs of 5 November 2015 and 1 October 2015) NOT(concept) also makes sense and can be used in Asa's reasoning.
Wednesday, September 14, 2016
Alexa
I am an occasional user of Siri and have bought an amazon Echo Dot in order to make use of their Alexa personal assistant. In the media Alexa is frequently referred to as being an artificial intelligence. (for example Popular Science, 25 June 2015 and Forbes 14 June 2016) As with Siri I would point out that Alexa lacks the kind of value system that Asa H and some other AIs have. This limits its intelligence.
I approve of amazon's intention to slowly grow Echo/Alexa's capabilities. This makes much more sense than what some of their competitors are attempting. (e.g., Jibo) The price is also very reasonable.
The home automation apps and hardware would allow you to interface Echo with something more resembling a mobile robot if you really wanted to.
Monday, September 12, 2016
Hierarchical STM
Asa H's short term memory (STM) is distributed across the various levels in Asa's hierarchical memory, unlike the typically monolithic STM that is assumed in most simple cognitive models. (See my blog of 5 March 2015.)
Friday, September 9, 2016
Work on machine consciousness
Hobson decomposes consciousness into 10 functional components which he briefly defines:
( in Scientific Approaches to Consciousness, Cohen and Schooler, Psychology Press, 1996, page 383 )
Attention: Selection of input data
Perception: Representation of input data
Memory: Retrieval of stored representations
Orientation: Representation of time, place, and person
Thought: Reflection upon representation
Narrative: Linguistic symbolization of representations
Emotion: Feelings about representations
Instinct: Innate propensities to act
Intention: Representations of goals
Volition: Decisions to act
My artificial intelligence Asa H performs all of these functions, some more completely than others.
Attention: See blogs of 1 June 2011, 21 June 2014, and 15 October 2015, for example.
Perception: This works well though we would like to have more input sensors.
Memory: Our case vector memory works well.
Orientation: Time is represented explicitly. Our self model can represent a person. Asa can recognize where it is by its surroundings.
Thought: Extrapolation and other learning algorithms examine and operate on the case memories.
Narrative: Asa has a simple natural language vocabulary but this is primitive compared to that used by most humans.
Emotion: Asa has a pain circuit and an advanced value system. It does not share all of our human emotions.
Instinct: Asa can have pain and reflexes, a drive to reproduce, etc.
Intention: Asa's value system defines its goals.
Volition: Asa acts so as to optimize its vector utility.
I believe that Asa is more conscious than humans in some ways* and less conscious in others.
* in that it has access to and control over some of its internal processes which humans don't.
Asa also has a much larger STM (short term memory) capacity.
Thursday, September 8, 2016
Scientific pluralism, multiple realities, and teaching
The average student wants to learn about the one correct truth/reality. When I'm asked any given question, multiple, maybe conflicting, lines of thought/argument pop into my consciousness. Sometimes I can hold back all the detail. Usually I cannot.
Wednesday, September 7, 2016
Meccano again
To make Lego stronger and more rigid they recommend adding more bricks. You can also use glue, but then you can't modify the machine. Meccano, be it plastic or metal, is held together with screws. This holds the parts together more securely but you can still modify it if you wish to. We can build robots with Meccano too. The pain system would have to be modified of course. (Blog of 31 March 2016)
Tuesday, September 6, 2016
Adafruit microcontroller
Asa H frequently uses multiple microcontrollers in order to control various parts of its robot body. (See, for example, my blog of 14 December 2015 where Lego NXT brain bricks were used.) As a possible lower cost substitute I have bought and will evaluate one of the adafruit boards.
The multi-microcontroller architecture makes it easier to add additional functionality over time. (See, for example, H. W. Lee's MSc thesis from Cornell University, May 2008.)
Finishing up
Again, engineering is a bit more straightforward than science is. You know you are finished with a project when you have a useful working product that performs the functions you had intended. (Of course, even in engineering, there is frequently the ongoing maintenance work or the need/desire to incorporate improvements.) But science is less clear cut. Yes, there is the work, finish, publish sequence but even after publication of some work there is usually more that remains to be done. I tell my students that you declare a project finished and move on to something else when:
1. Funding runs out on that project
2. Time runs out on that work
3. Your employer puts you on another project
4. You are seeing nothing new
5. You find something else that you could better spend your time on.
Thursday, September 1, 2016
The concepts of "best" and "better"
I have argued that with vector utility/value there is no such thing as "the best college." (See chapter 2 of my book Twelve Papers, at www.robert-w-jones.com under "book".) Similarly, it may be that there is no such thing as "the best of all possible worlds."
But world A might be "better than" world B. Suppose the vector value of worlds had only 2 incommensurable components (x,y) and that there were 3 possible worlds with: W1=(1,2), W2=(2,1), and W3=(3,1). Then W3 is better than W2. They are equally good according to component y and W3 is better than W2 according to component x. But we can not judge which of the 3 worlds is the best of all. W2 and W3 are better than W1 according to component x but W1 is better than W2 and W3 according to component y. If some one world had the highest value for ALL of the components (x,y,z,....) only then does a "best of all possible worlds" exist.
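Written as code, this comparison is just a Pareto-dominance check over the value components; a short sketch using the same numbers:

```python
# The comparison in the paragraph above is a Pareto-dominance check over the
# (incommensurable) value components. Using the same W1, W2, W3 numbers:

def better_than(a, b):
    """a is better than b if it is at least as good on every component
    and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

W1, W2, W3 = (1, 2), (2, 1), (3, 1)
print(better_than(W3, W2))                        # True
print(better_than(W3, W1), better_than(W1, W3))   # False False -> incomparable
# A "best of all possible worlds" exists only if one world dominates all the others.
```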
Tuesday, August 30, 2016
Graphical models in Asa H
I am reading David Danks' book Unifying the Mind (MIT Press, 2014) dealing with cognitive representation by graphical models. My AI Asa H grows just such models, see, for example, the one in my blog of 4 March 2015.
Danks argues that at least some of human thinking involves operations performed on cognitive representations that are structured as graphical models. Again, Asa H does just that.
Monday, August 29, 2016
Evaluation functions
I refer back to my 16 September 2010 blog defining several different sorts of intelligent systems and to the definition of intelligence in my letter in Skeptic, vol 12, #3, pg 14, 2006. A system having a performance element only (no evaluation function) has functions that map sensory inputs to motor outputs and yet it still maximizes some quantity (see Barrow and Tipler, The Anthropic Cosmological Principle, OUP, 1986, pg 151 and Yourgrau and Mandelstam, Variational Principles in Dynamics and Quantum Theory, Dover, 1968, pg 176). But this may not be doing a good job of maximizing the things we want: utility, reproductive success, or lifespan. Evolution acting on a set of such agents can tend to improve this over the time scale of generations. If the environment is not changing no further learning might ever be required.
Adding an evaluation function and some sensors that can measure reproduction, pain, damage, etc. can tend to improve the "utility" on a time scale shorter than an agent's lifespan. This is important, for instance, in dealing with environments that are rapidly changing.
We do not necessarily expect to ever be optimal; we are simply seeking to improve over time.
Saturday, August 27, 2016
Scooping
Human hands are soft, flexible, and can be cupped. Current robotic hands are hard and rigid. If a robotic hand with one thumb is used as a scoop to pick up small items (nuts and bolts or small Lego bricks) some of them spill off the side of the hand. (Though tilting the hand helps some.) The addition of a second thumb (see my blog of 18 August 2016) prevents this and makes for better use as a scoop.
Thursday, August 25, 2016
Meccano
As a child I played with the old metal Meccano/Erector sets. As an adult I even occasionally used metal Meccano parts as inexpensive fixturing inside of vacuum chambers. I decided to buy one of the new (plastic) Meccano Meccanoid robot kits. Meccano's Meccabrain has more outputs than do the Lego NXT or EV3. Speech recognition and the ability to use a smartphone camera as input for motion capture are also interesting. There is nothing like the range of commercially available NXT-compatible sensors, however. I plan to try to hack the Meccabrain inputs.
I have seen people who have hacked the Meccabrain software. We'll see if this platform can be interfaced with Asa H.
Wednesday, August 24, 2016
Division of labor
Although it is fair to call A.s.a. H. an artificial general intelligence, we have trained A.s.a. repeatedly using different (specialized) syllabi. The size and hierarchical depth may also vary from agent to agent. A typical A.s.a. is a specialist. We assume division of labor just as in human society. The agents can be organized and interact/cooperate in something like councilism.
IBM's Watson has been criticized when it has been necessary to specially tune the algorithm and/or training set for each application. But, like humans, A.I.s probably will be specialists. This helps to control complexity and works quite well in human society.
Thursday, August 18, 2016
A hand with two thumbs
I have published several papers on what are called discovery machines or creativity machines. See, for example, Trans. Kansas Acad. Sci. vol. 102, pg 32, 1999 and my book Twelve Papers (www.robert-w-jones.com under "book"). In the last few years my artificial intelligence A.s.a. H. suggested that a hand with 2 thumbs might be better than a hand with 1 thumb. Oftentimes one cannot tell how a creativity machine comes to its conclusions. The same is frequently true of human creative thought. In this case, however, A.s.a. was using its extrapolation algorithm. It had been told that a hand with a thumb is better than a hand with no thumb. (One might challenge this, of course, even though many people have stressed the importance of the opposable thumb.) Extrapolation evidently led A.s.a. to suspect that two thumbs MIGHT be better still. (A.s.a. knows to treat extrapolations as uncertain postulates.)
We can, of course, build robot hands having two thumbs. In fact there is a continuous spectrum of possible geometries. (You can google search "robot claw hand" and find quite a few.) A simple pincer made of two opposed "fingers" might be thought of as one using just their thumb and forefinger. Going beyond a hand with two thumbs might get one to the usual "claw" geometry, i.e. 2 pincers oriented at 90 degrees to one another. The number of fingers can be varied in each geometry. So the question is, which geometry is best? I suspect there is not a simple answer to this. Probably the question is, best for some particular task and environment. Perhaps we can get an engineering student to work on this.
In order to learn *
A.s.a. H.'s training curriculum attempts to expose Asa to the simplest concepts first and then gradually grow more complex concepts on top of these (hierarchically). Complex concepts are assembled (emerge) out of combinations of simpler concepts. This is rather different from what happens with human infants. Infants can not be totally shielded from the influences of the (complex) outside world. To what degree will this make it harder for Asa and humans to understand each other? To what degree will Asa's (model of) reality differ from a human's?
* reference to Ritter, et al's book by that name, Oxford U. Press, 2007
Thursday, August 4, 2016
Vectors as pluralism
Vector value was forced on us for the kind of reasons outlined in chapter 2 of my book Twelve Papers (www.robert-w-jones.com, book) and is used in ways like those shown in my blogs of 21 Sept. 2010 and 19 Feb. 2011. Pluralism was developed independently, forced upon us by the kind of reasoning described in my blogs of 17 August 2012 and 21 July 2016, for example. But, clearly, substituting vector quantities for scalars is a kind of pluralism. The two lines of research are now linked together.
Virtual LEGO
I have very little documentation of my Asa H-LEGO NXT/EV3 robots. I have installed and am now running LDraw in an effort to try to improve this situation in the future and in order to aid with designing. I am assembling a robot design library that I can draw from in much the same way as my A.I. code libraries.
Tuesday, August 2, 2016
Intelligence, broadly defined
I am reading Mancuso and Viola's book defending the idea that plants are intelligent. (Brilliant Green, Island Press, 2013) Intelligence comes in degrees, even among humans. Plants are able to detect gravity, temperature, humidity, light, chemical gradients, etc. and respond by moving their roots, leaves, stems, flowers, and producing chemicals of various kinds for various purposes dependent upon their current situation and needs. Like animals they try to survive and reproduce. They satisfy the definition of intelligence I defended in Skeptic, vol. 12, #3, pg 14, 2006. They have a cognitive architecture that is highly distributed. (I defined 5 different levels/kinds of intelligence in my 16 September 2010 blog.) I agree with Mancuso and Viola that if we were to contact space aliens we might need to have had experience with a wide variety of different intelligences.
Monday, August 1, 2016
The mental health of a presidential candidate
I personally don't think any human being should have access to nuclear weapons. I favor something like Gorbachev's global zero option. But under the present circumstances if a candidate is suspected of having cluster B personality disorder shouldn't the mental health community be vetting the candidates and offering their assessment prior to the election?
Sunday, July 31, 2016
Preventing overheating
I've added temperature > some critical T as a pain component for A.s.a. H., i.e., as an instinct. (Note, pain is a vector.) See Instinctive Computing by Y. Cai, Springer, 2016 and my blogs of 1 April 2013 and 28 Nov. 2014.
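A sketch of what that vector might contain (the threshold value and the other components shown are just assumptions for illustration):

```python
# Sketch of a vector pain signal including the new overheating component.
# T_CRITICAL and the other components are illustrative assumptions.

T_CRITICAL = 60.0   # degrees C, assumed threshold

def pain_vector(temperature, motor_stalled, battery_charge):
    return {"overheat": 1.0 if temperature > T_CRITICAL else 0.0,
            "stall": 1.0 if motor_stalled else 0.0,
            "low_charge": 1.0 if battery_charge < 0.2 else 0.0}

print(pain_vector(temperature=72.0, motor_stalled=False, battery_charge=0.6))
# {'overheat': 1.0, 'stall': 0.0, 'low_charge': 0.0}
```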
Saturday, July 23, 2016
Helping to reconceptualize reality
Giving Asa H senses that humans don't have may help it to reconceptualize reality. (See my blog of 8 April 2016 for example.) Giving it effectors and actions that humans don't have might also help. Be it wheeled propulsion, electromagnetic crane, welder arms, laser cutters, or what have you.
Thursday, July 21, 2016
Abstracts
I am working on two publications, one on Asa H as a self-aware computing system and one on reconceptualizing reality. The one on Asa H is based on the work described in this blog over the last few months of 2015. The work will be presented early next year and the abstract for it is probably in near final form:
I am only just beginning to pull together the work on reconceptualizing reality. It may be more than a year before this paper is completed. I have a draft of the abstract for it:
Wednesday, July 20, 2016
Scientific pluralism: economics
Two identical learners, observing different example input, or the same examples, but in different order, can form different categories and so judge newer/later input differently. Reality will come to be conceptualized in different ways. Alternate conceptualizations of reality underscore the need for scientific pluralism. (See, also, R. Jones, Trans. Kansas Acad. Sci., 2013, pg 78 and my blog of 17 August 2012)
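A toy illustration of the order effect (the one-dimensional data and the distance threshold are my assumptions): a simple incremental clusterer given the same examples in two different orders ends up with different categories.

```python
# Toy illustration of the order effect: an incremental clusterer that starts a
# new category whenever an example lies farther than THRESHOLD from every
# existing category centre. Data and threshold are assumptions.

THRESHOLD = 1.5

def categories(examples):
    cats = []                                    # running category centres
    for x in examples:
        if cats and min(abs(x - c) for c in cats) < THRESHOLD:
            i = min(range(len(cats)), key=lambda j: abs(x - cats[j]))
            cats[i] = (cats[i] + x) / 2          # absorb into the nearest category
        else:
            cats.append(x)                       # start a new category
    return cats

data = [0.0, 1.0, 2.0, 3.0, 4.0]
print(categories(data))          # [0.5, 2.5, 4.0] -> groups {0,1}, {2,3}, {4}
print(categories(data[::-1]))    # [3.5, 1.5, 0.0] -> groups {4,3}, {2,1}, {0}
```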
In the science of economics a marketplace is one popular model. In this model competition and individuality are emphasized. But, in keeping with scientific pluralism, we need other models for a better, more complete description of reality. A family is another good model. In this model cooperation and collectivism are stressed.
Saturday, July 16, 2016
Self-aware computing systems
I have ordered a copy of Peter Lewis, et al's new book Self-aware Computing Systems (Springer, 2016). My artificial intelligence A.s.a. H. is a self-aware computing system and I want to compare it with what Lewis et al have in mind.
Asa's concept of self (see my blog of 4 March 2015 for one early example) has mostly been learned during interaction with the real world and simulated environments (see chapter 1 of my book Twelve Papers, www.robert-w-jones.com for some examples) whereas Lewis et al's systems are intended to be mostly hand coded (but would include machine learning components).
Some ways in which Lewis et al's work is similar to Asa are:
Strong ties to and use of parallel processing
Power consumption awareness
Self-tuning resource allocation
Both evolve, adapt, learn
Some ways in which Lewis et al's work differs from Asa are:
Emphasis on specialized (factored) operating system
Asa's use of emergence (Asa learns a model/concept of self rather like infant humans do.)
Monday, July 11, 2016
Libraries
I am reorganizing my library, dividing it in half. About 3000 books are my research library. Another 3000 books are a general purpose library. I don't have a lot of space so about half the books are paper and about half are electronic. I prefer to read paper but you can more easily search electronic.
Friday, July 8, 2016
TensorFlow
Following all the hype I have downloaded a copy of Google's TensorFlow software and ordered a book on the subject. Although TensorFlow lets you do linear regression, clustering, and create neural networks (for example) I have not found anything that you can do with TensorFlow that I can't already do with my other software packages.
Wednesday, July 6, 2016
Semantic computing
Using the sensors and actuators described in my blog of 1 Oct. 2015 my artificial intelligence A.s.a. H. is able to do semantic computing, understanding the meaning of things like sound, color, temperature, acceleration, force, hard and soft, taste, etc.
Tuesday, July 5, 2016
Oppositional concepts
In structuralism (binary) opposition is seen as fundamental to human thought. See Oppositional Concepts in Computational Intelligence, by H. R. Tizhoosh, Springer, 2008, Positions, by Jacques Derrida, Univ. Chicago Press, 1982, and the works of Ferdinand de Saussure. The importance and use of oppositional concepts is discussed by various authors in Tizhoosh's book. When my artificial intelligence A.s.a. H. includes a category's (concept's) logical complement it has this form. (see my paper in Trans. Kansas Acad. of Sci., vol. 109, # 3/4, 2006, equation 3, with the printing error corrected so that it reads Ci* = 1 - Ci = 1 - In.Ini) (You only need to store Ci; you can generate Ci* from it when needed.) Similarly, when using vector categories (as Asa H does) we can consider the negation of each vector.
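A trivial sketch of the "store Ci, generate Ci* on demand" point, extended element-wise to a vector category (purely illustrative, not the code from the 2006 paper):

```python
# Trivial sketch: store only the category vector C and generate its
# complement/negation 1 - C when it is needed. Purely illustrative.

def complement(c):
    return [1.0 - x for x in c]

C = [1.0, 0.25, 0.5]      # a stored vector category
print(complement(C))      # [0.0, 0.75, 0.5] -- computed only on demand
```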
Friday, July 1, 2016
How deep should deep learners be?
I have addressed before the question of how deep our networks should be. (See my blog of 14 March 2014 for example.) Asa H, artificial neural networks, semantic networks, or whatever. In December of last year Microsoft (Peter Lee) reported work employing a network (ANN) of 152 layers. This seems excessive.
Features should be defined consistent with "carving up nature at the joints" (as with Plato's Phaedrus). Admittedly Asa H adds a few layers in the form of various preprocessors that it makes use of.
Tuesday, June 28, 2016
EV3 simulator
Things like the arduino simulator have been useful in separating the design and debugging of software from the design and debugging of robot hardware. EV3 simulators are now becoming available. I have just got the TRIK Studio 3.1.3 EV3 (and NXT 2.0) simulator up and running. The biggest issue with it is the fact that the documentation is in Russian.
Saturday, June 25, 2016
Observation versus experiment
I have emphasized the importance of AIs being able to act in/on the world. (For example in my blog of 23 March 2015) They need to be able to experiment as well as observe.
If one simply observes the operation of a Jacob's ladder two alternative theories of operation come to mind. Hot air rising between the electrodes may be lifting the arc or, alternatively, a Lorentz force may exist due to the current carried through the arc and the magnetic field produced by the arc and electrode loop. The Lorentz force would then propel the arc upward.
As an experimenter one can simply tip the apparatus on its side and see that the arc no longer travels along the electrodes; the hot air explanation is the better one. The agent/experimenter's ability to act on the system being studied is quite helpful.
Thursday, June 23, 2016
Walking
Because of the complexity and expense involved I have avoided humanoid robots (blog of 18 Feb. 2016). But to give Asa H the concept of "walk"/"walking" I have located a few sensors on a small Lego NXT walker.
Tuesday, June 21, 2016
Student projects
How do you involve students in your research? You have to find a manageable chunk that you believe will lead to a clear and useful (publishable?) result in a reasonable length of time. For a Ph.D. this needs to be doable in a couple of years and it needs to be original research. For an MSc perhaps you have only about a year. For an undergrad project, just a few months.
I sure couldn't put students on my project looking for concepts with which to reconceptualize reality. With that work you might get a single concept, like non-Markovian models, after a year or more of effort. You can't just say, maybe we'll discover something interesting in a year or two.
As a consequence of capitalism, business practices are being injected into every aspect of life. The idea of a master's thesis contract is a case in point. Science is not business. One cannot predict what will be discovered or when.
I suspect this all distorts the scope and direction of scientific research.
Saturday, June 18, 2016
Ultimate reality, on various scales
We don't have direct access to ultimate reality; we have only our sense impressions, our sensitivity to light, sound, pressure, temperature, and certain chemicals. (Some other creatures have sensitivity to electric and magnetic fields.) Like Mariam Thalos (Without Hierarchy, OUP, 2013) I do not believe that ultimate reality is reducible solely to the microworld, be it strings, or branes, or quantum fields, or Hilbert spaces, ... The macroworld is also ultimate, be it a multiverse, higher dimensional, or whatever. There are also things like mind, thought, and consciousness which are patterns/processes. What these are patterns of, or processes in, is of less importance. Asa's thoughts can be patterns of activity in an electronic computer or in an optical computer. A computer, in turn, can be assembled out of matter in our world or equally well out of gliders in a universe like Conway's game of life.
Tuesday, June 14, 2016
Scientific pluralism, probability and statistics
There are a number of different theories of probability: objectivist, subjectivist, frequentist, Bayesian, etc. (see, for example, AI: A Modern Approach, 3rd edition, 2010, pg 491) Statistical inferences are then made based upon a variety of competing approaches, each with its own strengths and weaknesses. (See, for example, S. N. Goodman, Science, vol. 352, 2016, pg 1180) In general one cannot make claims based upon a single estimate of statistical significance, be it Fisher's P value, Bayes factors, or the like. Rather, one needs a pluralistic approach to value/assessment. (In a society of Asa H agents I have used various different value functions/networks. See, for example, with Asa H light, my blogs of 10 Feb. and 19 Feb. 2011)
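As a toy illustration of this pluralism (not Asa H code; the coin-flip data and the uniform prior under the alternative hypothesis are assumptions chosen only for simplicity), one can compute both a frequentist P value and a Bayes factor for the same observations and see that they answer different questions:

```python
# Contrast two competing assessments of the same data: a two-sided exact
# binomial P value and a Bayes factor for a simple coin-flip experiment.
from math import comb

from scipy.stats import binomtest

k, n = 14, 20                     # 14 heads in 20 flips (illustrative data)

# Frequentist: exact binomial test of H0: p = 0.5.
p_value = binomtest(k, n, p=0.5, alternative="two-sided").pvalue

# Bayesian: BF01 = P(data | H0) / P(data | H1), with a uniform prior on p
# under H1 (beta-binomial marginal likelihood).
like_h0 = comb(n, k) * 0.5**n
like_h1 = 1.0 / (n + 1)           # integral of C(n,k) p^k (1-p)^(n-k) dp over [0,1]
bf01 = like_h0 / like_h1

print(f"P value      = {p_value:.3f}")
print(f"Bayes factor = {bf01:.3f}")
# The two numbers summarize the evidence differently, one argument for a
# pluralistic assessment rather than a single significance criterion.
```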
Saturday, June 11, 2016
Subsymbolic?
Asa H can be taught names for the concepts it learns, for example:
Collision=(sense near, bump, decelerate) can be expanded to (taught):
Collision=(sense near, bump, decelerate, sound "collision")
Artificial neural networks, on the other hand, are frequently subsymbolic.
How many of the concepts (case vectors) that Asa learns should be named (symbolic)?
Going in the other direction, Theodore Sider has suggested that complex linguistic entities be constructed as sequences or tree-structures of linguistic atoms (words). (Writing the Book of the World, OUP, 2011, page 295) This is exactly what Asa H creates (learns). We would certainly not want to assign names (words) to all of these larger scale case vectors. I.e., vocabulary choice is required at this point.
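A minimal sketch (not the actual Asa H implementation; the names and data structures are illustrative only) of what "naming" a learned case vector amounts to, following the collision example above:

```python
# A learned, unnamed (subsymbolic) case vector is just a list of components.
collision = ["sense near", "bump", "decelerate"]

def teach_name(case, name):
    """Attach a symbolic label by appending a spoken-word component."""
    return case + [f'sound "{name}"']

collision = teach_name(collision, "collision")
print(collision)   # ['sense near', 'bump', 'decelerate', 'sound "collision"']
```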
Wednesday, June 8, 2016
Patom theory
John Ball's Patom theory (Speaking Artificial Intelligence, ComputerWorld, 2015) is quite similar to my A.s.a. H but has not yet been developed into code. When it has been, I will be interested to see just what design choices have been made and how it performs.
Tuesday, June 7, 2016
Downward activation in Asa H
In Asa H predictions and output actions are the result of downward (backward, top-down) activation. They involve a flow of activation from upper levels in the hierarchical memory to lower levels. It is also possible for an active case to send activity downward to all of its vector components. This additional activity could then influence what other patterns (cases) might be or become active.
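A minimal sketch of this downward flow of activation, with made-up cases, activation levels, and gain factor (none of this is the actual Asa H code):

```python
# An active upper-level case pushes some of its activity down onto each of its
# component features; those primed features can then help other lower-level
# cases match and become active.
upper_case = {"components": ["sense near", "bump", "decelerate"],
              "activation": 1.0}

lower_activation = {"sense near": 0.2, "bump": 0.0,
                    "decelerate": 0.0, "sound": 0.1}

DOWNWARD_GAIN = 0.5   # fraction of the case's activity passed to each component

for feature in upper_case["components"]:
    lower_activation[feature] += DOWNWARD_GAIN * upper_case["activation"]

print(lower_activation)
# The components of the active case are now primed, so patterns sharing those
# components are more likely to become active in turn.
```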
Friday, June 3, 2016
Lego NXT stigmergy
A swarm of Lego NXT colored brick (or beacon) seeking robots and brick (or beacon) dispensing robots can employ stigmergy to self-organize.
Wednesday, June 1, 2016
What is simple?
There are those who believe that our universe is not as simple as an empty one would be and that an explanation is needed as to why our universe is as complex as it is. There is more than one issue here but firstly, just what constitutes simplicity? Here are some possibilities:
1. What is more easily learned. (But by which learning algorithm(s)?)
2. Shortest. (But in what language? Which representation?)
3. Have the greatest symmetry. (In which geometry? And which object properties are to remain unchanged by the transforms?)
4. Information-theoretic measures. (Which regularities should be counted?)
More likely what people mean by simplicity is a vector quantity again, a cluster of components. (Just like the concepts Asa H learns. For Asa most concepts are vectors.) And people won't even agree on those components.
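As one concrete illustration of point 4 in the list above, compressed length can serve as a rough information-theoretic proxy for simplicity; the sketch below (the test patterns are arbitrary choices) also shows why the answer depends on the representation and compressor chosen:

```python
# Compressed length as one crude, representation-dependent simplicity measure.
import zlib

patterns = {
    "empty":      b"\x00" * 1024,          # an "empty" pattern
    "periodic":   b"01" * 512,             # a highly regular pattern
    "structured": bytes(range(256)) * 4,   # a less regular but still lawful pattern
}

for name, data in patterns.items():
    compressed = len(zlib.compress(data, 9))
    print(f"{name:10s} raw={len(data):5d}  compressed={compressed:5d}")
# A different compressor (a different "language") would rank these patterns
# differently, so at best this supplies one component of a vector-valued
# simplicity measure.
```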
Saturday, May 28, 2016
What constitutes an explanation?
What should an artificial intelligence like A.s.a. be able to offer us in support of its conclusions or recommendations? What can I offer students struggling to understand difficult subject matter? (Like quantum mechanics and relativity) What should we expect from Asa, what can students expect from us?
Explanations come in various kinds and at different levels of detail. There are formal-logical, ontological, and pragmatic explanations, and explanations in terms of inheritance.
Formal-logical explanations might be in the form of covering-laws, be statistical, or unificationist in nature. For example, a logically valid deductive chain of propositions starting from a set of initial conditions may lead to (explain) a conclusion.
Ontological explanations may have an event explained by another event. A fire may be the explanation for the presence of a cloud of smoke.
A pragmatic explanation might explain a plane crash in terms of pilot error or a fist fight in terms of drunkenness.
Property inheritance may serve to explain something. Socrates died because all human beings die and Socrates was a human being.
Level of detail is important. We would explain a decline in the stock market in terms of recent national news events but not in terms of the interaction of quarks and gluons. We might be satisfied to know that smoking caused a case of lung cancer or we might want the detailed biochemical mechanisms involved.
Wednesday, May 25, 2016
Patterns of or patterns in
While considering the possibility of different degrees of nothingness Robert Lawrence Kuhn suggested that one might have no physical reality but still have a Platonic world of mathematical entities existing, things like numbers, equations, shapes, etc. (Closer to Truth, PBS, 24 May 2016) If mathematics is simply the science of patterns this doesn't seem likely. (Resnik, Mathematics as a Science of Patterns, OUP, 1997) You need to have something physical that you can then have patterns of. Patterns don't exist if the underlying fabric doesn't exist. (Synthetic or extrapolated patterns need not exist in the real world however. See my blog of 18 Feb. 2016 for examples.)
Friday, May 20, 2016
Asa's knowledge of the world
"Knowledge is a particular way of being connected to the world, having a specific real factual connection to the world: tracking it." (Nozick, Philosophical Explanations, Harvard University Press, 1981) This is just what Asa H does.
Thursday, May 19, 2016
AI education curriculum, output sequences
Asa H typically deals with far more inputs than outputs. Typical output primitives are like those listed in my 1 April 2013 blog. A curriculum for learning more complex output sequences might then be:
1. learn to go fetch an object (perhaps identifiable by color or a beacon or gps location)
2. learn to grasp, lift, and carry an object (perhaps to some target location)
3. learn to group a number of objects together (at some location)
4. learn an orderly grouping of objects, one at a time, counting as you go
5. remove objects from a group, one at a time, counting as you go
In some cases odometry is useful. Steps 1 through 3 have been done with Lego robots; all five have been done in simulation.
You can't directly transfer what's been learned to new (significantly different) robots and environments.
Wednesday, May 18, 2016
Aristotle's 10 categories as Asa H concepts
In the Organon Aristotle claims to enumerate all of the possible kinds of things that can be the subject or predicate of a proposition.
1. Substance: This category would include as components all of the objects Asa learns to identify.
2. Quantity: This would include as vector components distances, weights, and durations that Asa observes.
3. Quality: Asa's colors, tastes, temperature, etc.
4. Relative: knowledge, size, near, far
5. Where: GPS and beacon locations
6. When: time, date
7. Attitude: rise, sit, lie
8. Having: a load for example
9. Doing: move, push, carry, lift
10. Being affected: to feel force, acceleration, damage
Tuesday, May 17, 2016
Asa's gestalt
"Gestalt psychology is a theory of sensation which suggests that we are primarily aware of organized wholes of our environment and not of the irreducible elements into which these wholes might in theory be analyzed." (The Blackwell Dictionary of Western Philosophy, Blackwell, 2004, pg 282)
Asa H 2.0 light, for instance, reacts to a group of NM input features at a time. (see my blogs of 10 Feb. 2011 and 14 May 2012) In the case of visual (image) input NM can be quite large.
Friday, May 13, 2016
More cognitive primitives
Lakoff and Nunez believe that human style reasoning requires:
prototypes, sets of common properties or examples
image schemas for spatial relations, things like an above concept, contact or touch concept, a support concept, a container concept with inside, outside, and boundary, a motion, path, goal schema, in, out, to, and from schemas
conceptual frames, the knowledge of sets of different steps/processes that achieve the same result
metaphor, perhaps as common correlations and polysemy
blends, the combining of several concepts
(see Where Mathematics Comes From, Basic Books, 2000, page 121)
I want to try to give my artificial intelligence A.s.a. H. these capabilities in order to allow it to do more humanlike reasoning. (Asa currently has a number of them of course.)
Mathematical platonism
While I agree with something like Cockshott et al.'s and Lakoff and Nunez's accounts of the origins of mathematical concepts (see Computation and its Limits, Oxford University Press, 2012, chapter 2 and Where Mathematics Comes From, Basic Books, 2000) I do believe that mathematical entities exist fully independent of the human mind or life. I believe that mathematics is, among other things, a science of patterns, and patterns exist. (see M. D. Resnik, Mathematics as a Science of Patterns, Oxford University Press, 1997) We can be talking about patterns of anything, patterns in our sensory inputs if nothing else. (Asa H builds concepts from these.) If anything exists then patterns may exist in it.
Tuesday, May 10, 2016
Mathematical pluralism
Scientific pluralism should apply to mathematics too. There is not one kind of geometry, there are many: Euclidean, non-Euclidean, projective, inversive, differential... There is not one set theory, there are many, each defined by different axioms. There is not one formal logic, there are many: propositional logic, predicate logic, spatial logic, temporal logic, second order logic, ... There is not one system of mathematics, there are many. (See Where Mathematics Comes From, Lakoff and Nunez, Basic Books, 2000 and Complementarity in Mathematics, Kuyk, Reidel Publishing, 1977)
Monday, May 9, 2016
Sunday, May 8, 2016
Asa H is sentient
My artificial intelligence A.s.a. H. is sentient according to the Merriam-Webster's definitions:
Able to feel, see, hear, smell, or taste
Responsive to or conscious of sense impressions
Aware
Finely sensitive in perception or feeling
Asa H does all of these when interfaced with Lego NXT/EV3 robots. It is also consistent with Clark's detailed theory of sentience (Austen Clark, A Theory of Sentience, Oxford University Press, 2000).
Asa H's concepts
Asa has occasionally formed concepts involving retrocausation and top-down causation. I have experimented a bit with ways of blocking/discouraging these, or other concepts. Alternatively we can keep them, reconceptualizing reality.
Simple parallel computing again
Neural networks are especially hard/slow to train. Rather than training a single network having m inputs and n outputs one can instead train, in parallel, on n computers, n networks each having the m inputs but each having only one of the outputs.
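A minimal sketch of this decomposition, using trivial least-squares fits in place of real neural network training (the data sizes and the use of ProcessPoolExecutor are illustrative assumptions):

```python
# Instead of one network with m inputs and n outputs, fit n separate
# single-output models in parallel, one per output, then recombine them.
from concurrent.futures import ProcessPoolExecutor

import numpy as np

rng = np.random.default_rng(0)
m, n, samples = 8, 4, 200
X = rng.normal(size=(samples, m))          # shared m-dimensional inputs
Y = rng.normal(size=(samples, n))          # n outputs to be learned

def fit_single_output(args):
    X, y = args
    w, *_ = np.linalg.lstsq(X, y, rcond=None)   # one single-output "network"
    return w

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:         # one worker per output
        weights = list(pool.map(fit_single_output,
                                [(X, Y[:, j]) for j in range(n)]))
    W = np.column_stack(weights)                # recombine into an m x n map
    print(W.shape)                              # (8, 4)
```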
Saturday, May 7, 2016
Asa clustering
Asa H consists of a hierarchical memory assembled out of clustering modules and feature detectors. In typical Asa H light implementations (see my blogs of 10 Feb 2011 and 14 May 2012) clustering is performed on vectors having both input and output components. One can add some discrimination by also clustering based on inputs alone and outputs alone. Usually there are far more inputs than outputs however.
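A rough sketch of these clustering variants, with k-means standing in for Asa H's clustering module (the data and cluster count are arbitrary):

```python
# Cluster on combined input+output case vectors, and also on inputs alone and
# outputs alone, then compare the three labelings for extra discrimination.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
inputs = rng.normal(size=(100, 12))        # many input components
outputs = rng.normal(size=(100, 3))        # few output components

combined = np.hstack([inputs, outputs])    # the usual combined case vector

labels_combined = KMeans(n_clusters=5, n_init=10).fit_predict(combined)
labels_inputs   = KMeans(n_clusters=5, n_init=10).fit_predict(inputs)
labels_outputs  = KMeans(n_clusters=5, n_init=10).fit_predict(outputs)

# Cases that fall together on the combined vectors may still differ when
# judged on inputs or outputs alone.
print(labels_combined[:10], labels_inputs[:10], labels_outputs[:10])
```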
Semantic engine
Floridi has proposed a two-machine metaprogramming architecture to solve the symbol grounding problem and give an artificial agent semantics. (Luciano Floridi, The Philosophy of Information, Oxford University Press, 2011, pages 166-171) There looks to be some similarity to Asa H and to my older A.s.a. F. architecture. Floridi assumes a society of such agents intercommunicating with each other, again as with Asa F and Asa H. I think I may code up a minimal implementation of his two-machine agent and try it out, in keeping with scientific pluralism. I want to see if it does anything that Asa H doesn't do.
Friday, May 6, 2016
Some history
In my "X files" I have a copy of Frank Tipler's book The Physics of Immortality (Doubleday, 1994). On page 192 Tipler says "If the mathematics required to describe reality is sufficiently simple, then Gödel's Theorem will not apply..." I had had this same idea in the late 1960s. Perhaps the ideal theoretical physics would not be subject to Gödel's limitations, something along the lines of Presburger arithmetic or propositional logic. I had been accepted for graduate study at New York University (the Courant Institute in lower Manhattan) and had been assigned a faculty advisor. I asked that I be allowed to take a math course on formal logic along with my physics courses. When this was refused I decided not to attend NYU and my career went off in a different direction. The rest, as they say, is history.
Asa H learns/discovers sticky/tacky
With the added ability to measure pull as well as push forces Asa H has learned the concept:
sticky/tacky = (gripper motor close, feel push force, gripper motor open, feel pull force)
as contrasted with:
non stick = (gripper motor close, feel push force, gripper motor open)
The availability of sensors influences what concepts will be acquired/defined.
Tuesday, May 3, 2016
Asa H, being conscious of
Bertrand Russell said "Consciousness: a person is said to be conscious of a circumstance when he uses words, or images of words, to others or to himself, to assert the circumstance." (Collected Papers of Bertrand Russell, vol. IX) My artificial intelligence Asa H did just that, for instance, when it suffered a collision and reported it to me in natural language, "collision." See chapter 1 of my book, Twelve Papers, www.robert-w-jones.com, Book.
Friday, April 29, 2016
Attention
Over the several decades that I have been doing AI research the biggest problems have been the related issues of control of complexity and focus of attention. Can my current AI, Asa H, be taught what it should attend to? Can the most important cases in each level of the memory hierarchy be taught sufficiently high utilities and, most importantly, will this exhibit the required dependence on context? Does having vector utility help? The utility of a given case (pattern/concept) depends upon the context. The problem of control of attention oftentimes didn't arise in simpler domains but is important for operation in the real world.
Examples of vector utility
If you buy a resistor its usefulness in a given electronic circuit depends upon, at least, its electrical resistance, R, and the power it can withstand, P. It has a vector utility of at least U = (R, P). The utility might also depend upon the dollar cost of the component and its physical size as well.
If you buy a capacitor its usefulness depends upon, at least, its capacitance, C, and the maximum voltage it can tolerate, V. Its vector utility is then, at least, U = (C, V). No single scalar utility can be assigned to the capacitor unless you can specify its application. If the capacitor is to store charge, say as a memory cell, then perhaps a suitable scalar utility would be U = Q = CV. On the other hand, if the capacitor is intended to store energy, say in a capacitor bank, then perhaps a suitable scalar utility would be U = E = 0.5CV^2. If you wish to store charge while using a minimum energy then perhaps U = Q/E = 2/V. A suitable scalar utility depends upon the context at the moment. A general utility needs to be a vector. (See chapter 2 of my book, Twelve Papers, www.robert-w-jones.com, book.)
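A small sketch of this context dependence (the component values and context names are arbitrary):

```python
# The component's utility is the vector U = (C, V); a scalar figure of merit
# can only be computed once a context (an application) has been chosen.
def scalar_utility(C, V, context):
    if context == "store charge":        # memory cell: Q = C V
        return C * V
    if context == "store energy":        # capacitor bank: E = 0.5 C V^2
        return 0.5 * C * V**2
    if context == "charge per energy":   # Q / E = 2 / V
        return 2.0 / V
    raise ValueError("no scalarization defined for this context")

C, V = 100e-6, 50.0                      # a 100 uF, 50 V capacitor
for context in ("store charge", "store energy", "charge per energy"):
    print(context, scalar_utility(C, V, context))
# The ranking of two capacitors can differ from context to context, which is
# why a general utility has to remain a vector.
```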
Monday, April 25, 2016
Is the universe a computer?
"What is computation?" was the subject of a 2010 ACM symposium and "what is the nature of reality?", the universe, is an open question in philosophy. In my blog of 28 Aug. 2012 I suggested that a computer might best be described as a reconfigurable causal network. In the classical physics clockwork view of reality the universe certainly is a causal network. The problem is its not all that reconfigurable. The big bang did the configuring and that's hard to change. Perhaps equate a multiverse with a computer then. Provided there are a variety of initial conditions.
If you mean by "the universe" only that portion of reality which is external to yourself then your actions might do a tiny bit of reconfiguring, but not much. You plus the universe would be a coupled pair of computers/Turing machines. I run some Asa H simulations in this way. (See my blog of 10 March 2016.)
One can generalize these notions to a quantum computer and a quantum mechanical reality/universe.
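A minimal sketch of the coupled-machines picture, with toy dynamics invented purely for illustration:

```python
# An agent and an environment, each a simple state machine, stepping each
# other in turn: the agent's actions do a tiny bit of "reconfiguring."
def environment_step(env_state, action):
    """The 'universe' updates its state given the agent's action."""
    new_state = env_state + action
    return new_state, new_state            # new state, observation returned

def agent_step(agent_state, observation):
    """The agent updates its state from what it senses and chooses an action."""
    new_state = 0.9 * agent_state + 0.1 * observation
    action = -0.01 * new_state              # a small reconfiguring act
    return new_state, action

env, agent, action = 1.0, 0.0, 0.0
for t in range(5):
    env, obs = environment_step(env, action)
    agent, action = agent_step(agent, obs)
    print(f"t={t}  env={env:.3f}  agent={agent:.3f}  action={action:.4f}")
```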
Saturday, April 23, 2016
Reconceptualizing reality
As evolution changes the uses of things, both physical things and mental things, it also necessarily changes their meanings. So ontologies must change over time. Usually these are small and gradual changes. Occasionally they are the more profound changes I have been thinking about like the transition from classical physics to quantum mechanics or relativity theory.
Friday, April 22, 2016
It From Bit?
If I am serious about the possibility of reconceptualizing reality I should be willing to at least consider John Wheeler's It from Bit postulate. To do this I am reading Aguirre and Foster's It From Bit or Bit from It, Springer, 2015.
Actually, the history of a computational reality notion dates back to Konrad Zuse. Zuse called it the computing universe. See, The Computer - My Life, Springer-Verlag, 1993, pg 175.
If all we really experience is Hume's bundle of perceptions or Lewis' observational terms it might make sense to ground our ontology on information. One could stop there and call it idealism?
Thalos has attacked the idea that we should be trying to describe everything on some one single lowest level of reality (See, Without Hierarchy, M. Thalos, Oxford Univ. Press, 2013). Others suggest there is not one single "correct" or "best" ontology. (See, The Logic of Reliable Inquiry, K. Kelly, Oxford Univ. Press, 1996 and Constructing the World, D. Chalmers, Oxford Univ. Press, 2012) This is in line with scientific pluralism.
Asa H's ontology seems to evolve over time and depend upon the order in which it receives its experiences.
What is the "best" ontology will depend on what I'm using it for. Quantum fields might be the best description of ultimate reality but they would not be the best vocabulary with which to talk with friends in daily life. They might not be the best basis for a private language or language of thought either. This was a reason for adopting scientific pluralism. Even in a single branch of science one might find use for several different "languages." The Heisenberg picture of quantum mechanics versus the Schrodinger picture for example. Are wave functions a function of time? Are operators a function of time?
It would seem that we need several different (but possibly overlapping) ontologies. And I expect these to change over time.
Thursday, April 21, 2016
Actions
Is all that we can "directly know" just our sense impressions (input signals)? That is, Lewis' "observational terms", O-terms. Hume said "...only the successive perceptions constitute the mind." (A Treatise on Human Nature, 1739) Asa H does learn just such sequences of sensations. But for Asa H as well as for humans there are also actions (output signals). And these are important for optimization/intelligence. Simple observation versus full blown experiment. In Asa H extrapolations, deductions, etc. serve as hypotheses for future testing and correcting.
Wednesday, April 20, 2016
Complexity and diversity
I am reading the book Complexity and the Arrow of Time by Lineweaver, et al. (Cambridge Univ. Press, 2013) Perhaps the problem is that complexity and diversity are again simply the names of vector quantities, quantities having components like: genome length, the number of cell types, the number of niches, the number of species, the specialization of body parts, the number of component functions, etc. (If these things can be measured.)
Asa H and associationism
At a conference a few weeks ago a colleague suggested to me that Asa was a software implementation of Hume's associationism. While I agreed in part I did point out that a lot more was also going on in Asa H besides association. See, for example, my website www.robert-w-jones.com, cognitive scientist, theory of thought and mind. I referred him instead to John H. Andreae's work.
But the Asa H experiments ARE relevant to philosophy (as described in many of my blogs). According to David Hume the "self" is "...a bundle or collection of different perceptions which succeed each other..." This is exactly what Asa's concept of its self is, as I've described it while that concept has been evolving.
Saturday, April 16, 2016
Asa H discovers mind-body dualism
My artificial intelligence Asa H has formed a concept composed of the actions it is able to perform. This concept is, in turn, composed out of what might be termed a concept of mental action and a separate concept of physical action. The concept physical actions is composed of things like moving, turning, grasping, lifting, etc., all actions that require substantial current draws on the NXT/EV3 batteries. The concept mental actions is composed of things like extrapolation from the case base, searching through the case base, loading a data (case) file, sorting a file, etc., all actions that involve no large added current drains. (To increase its utility measures Asa prefers to take actions that do not require a large current drain.) Asa H makes this distinction between the mental and the physical.
Friday, April 15, 2016
Consciousness in Asa H
Typical Asa H light software (see my blogs of 10 Feb. 2011 and 14 May 2012) allows for simple adjustments to learning by setting parameters like L and skip. More complex software packages allow Asa to observe the amount of time it spends taking input, giving output, searching the case base, performing feature extraction, adding to memory, sorting memory, comparing, extrapolating, doing deduction, doing simulation, case updating, etc. and then correlate these efforts with the utility (rewards) observed/received over time (see chapter one of my book Twelve Papers, the section titled "self monitoring," www.robert-w-jones.com). Parameters like L and skip are, themselves, made inputs to the hierarchical memory and Asa learns a vector/concept like:
thought = (search, deduction, simulation, sorting, extrapolating, comparing, remembering, etc.).
Asa can be allowed to adjust the learning itself by making the parameters outputs of the memory hierarchy. Thinking can come to constitute a part of Asa's concept of its self:
self = (sense, act, health, thought).
This is a further evolution of Asa's self concept. Asa can observe some of its own thought processes.
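A minimal sketch of this kind of self monitoring, correlating (made-up) per-episode effort on each mental activity with the reward received:

```python
# Record how much time is spent on each kind of mental activity per episode,
# then correlate each activity with the utility (reward) observed.
import numpy as np

activities = ["search", "deduction", "simulation", "sorting", "extrapolating"]
rng = np.random.default_rng(2)
effort = rng.random((50, len(activities)))      # time spent, 50 episodes
reward = 2.0 * effort[:, 0] + 0.5 * effort[:, 2] + rng.normal(0, 0.1, 50)

for j, name in enumerate(activities):
    r = np.corrcoef(effort[:, j], reward)[0, 1]
    print(f"{name:13s} correlation with reward: {r:+.2f}")
# Activities that correlate with reward can then be allotted more time once
# the corresponding parameters are made outputs of the memory hierarchy.
```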
In interaction with the world I have tried to give Asa the same sort of sensations and behaviors that a human might experience. If Wittgenstein is right this might be necessary if humans and AIs are to understand each other. But what Asa sees of its own thought processes is quite different from what humans know of their own inner thoughts. Will this prove to be a problem? Might the same thing be true if we met space aliens?
Wednesday, April 13, 2016
One, two, three and four dimensional memories for Asa H
In my blog of 7 Jan. 2015 I describe how to give my artificial intelligence Asa H a one dimensional memory for things like recorded speech and a two dimensional memory for things like images. With Asa H now controlling a distributed set of Lego NXT and EV3 robots it is also possible to establish a three dimensional memory with the agents distributed about in 3-space. Since this is recorded as a function of time it is a four dimensional pattern in memory.
This hardware and software configuration quickly forms the "action-at-a-distance" concept as it learns.
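A minimal sketch of such a four dimensional memory (the grid size, number of agents, and readings are arbitrary assumptions):

```python
# A numpy array indexed by a coarse (x, y, z) grid of agent positions and by
# time step; each cell holds one scalar sensor reading.
import numpy as np

nx, ny, nz, nt = 4, 4, 2, 100
memory = np.zeros((nx, ny, nz, nt))

def record(x, y, z, t, reading):
    """Store an agent's sensor reading at its grid cell and time step."""
    memory[x, y, z, t] = reading

record(0, 1, 0, 0, 0.7)      # one agent's reading at t = 0
record(3, 2, 1, 0, 0.2)      # a distant agent's reading at the same instant
# A change recorded at one cell followed by a correlated change at a distant
# cell (at a later t) is the kind of pattern behind the learned
# "action-at-a-distance" concept.
```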
Tuesday, April 12, 2016
There is not one single fundamental level of reality
Some empiricists might tend to believe that there is a single most fundamental level of reality be it strings, or quantum fields, or what have you. Schaffer has argued against this view (Nous, 37:3, pg 498, 2003). Concepts like Chalmers' indexical I (Constructing the World, Oxford U. Press, 2012, pg 390) or Wierzbicka's substantive I would correspond to the self concept that my AI Asa H is learning (see blogs of 5 Dec. 2015 and 4 March 2015). This concept resides on a fairly high level in the Asa H case memory hierarchy. Other concepts like Chalmers' quality color or Wierzbicka's touch or hear reside at much lower levels in Asa's hierarchical memory. Furthermore, I do not think we need to believe equally strongly in all of our concepts, even our most fundamental ones. (Scientific pluralism again.)
Friday, April 8, 2016
Seeking ultimate O-terms
In his Aufbau project Carnap argued that all concepts could be constructed from a similarity relation and a few logical concepts (like AND, OR, NOT). These would be, in effect, Lewis' ultimate O-terms. Asa H has (one or more) similarity measures and NOT built in (innate) and can learn sequences that implement AND or OR.
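A toy sketch of these ingredients (an illustration under my own assumptions, not Asa H's actual similarity measure; the min/max forms of AND and OR are just one possible realization of what could be learned as sequences):

```python
# Innate similarity and NOT, plus simple AND/OR combinations on graded values.
import numpy as np

def similarity(a, b):
    """Innate similarity: normalized dot product between two feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def NOT(x):            # innate negation on graded (0..1) activations
    return 1.0 - x

def AND(x, y):         # one conjunction-like combination
    return min(x, y)

def OR(x, y):          # one disjunction-like combination
    return max(x, y)

print(similarity([1, 0, 1], [1, 1, 1]))     # about 0.82
print(NOT(0.3), AND(0.7, 0.4), OR(0.7, 0.4))
```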
In addition to Anna Wierzbicka's 63 semantic primes Locke offered 8 ultimate conceptual primitives: extension, solidity, mobility, perception, motivity, existence, duration, and number (Essay Concerning Human Understanding, book 2, chapter 21, 1690). But some of these can be decomposed into other primitive concepts (more primitive ones?). Solidity can be learned as a sequence of actions involving touch, force or pressure application, and observed degree of deformation/deflection. Mobility can be learned as a sequence of detecting, touching, grasping, lifting, and carrying. (Note that outputs, actions, as well as inputs, sensations, are involved.)
In this way we can try to find a more primitive (most primitive?) set of O-terms. Or, by recombining the basic conceptual elements (subelements?) could we hope to operationalize our project to reconceptualize reality? Physicists, for example, might wish to combine Locke's perception and existence into one single concept. The idea being that what exists is whatever can be detected/measured by sensors/instruments.
Empiricists like Lawrence Barsalou and Jesse Prinz believe that all the most primitive concepts are acquired by direct perception alone. (Furnishing the Mind, Bradford Book, 2004) In that case intelligences with the same senses might expect to share the same (or at least similar) fundamental concepts. We might expect Asa H to reconceptualize reality due to its ability to perform radio ESP, echolocation and ranging and to directly sense GPS location, electric fields, magnetic fields, atmospheric pressure, and nuclear radiation.
Monday, April 4, 2016
Impact factors and citation analysis
I was talking with a college administrator at a conference this weekend and the discussion turned to measuring research quality. While I don't think they are totally useless I do believe that, like student evaluations of instruction, citation analysis of scientific research is largely a popularity contest.
I want the truth of a scientific idea to be measured by its agreement with observation and experiment. I don't believe science is democratic. Human opinion cannot be the deciding factor.
What about the most difficult of fields where only a handful of people are even capable of understanding the concepts involved? (Feynman said that nobody understands quantum mechanics.) There won't be many papers to cite or readers to have read them. Popular is not the same as good.
Thursday, March 31, 2016
Embodiment and successful human-computer communication
Wittgenstein argued that it was "Shared human behavior....by means of which we interpret an unknown language." (Philosophical Investigations, 1953, Blackwell, 2001). If this is true then for an AI and a human to communicate and understand each other they need to share the same sets of behaviors. This is also what's needed to define the most compact and most primitive set of basic concepts and vocabulary. Again, with Asa H I am trying to accomplish this with the kind of embodiment described in my blogs of 1 Oct. and 5 Nov. 2015. (Also see a beginning of this in the protolanguage section of chapter 1 of my book Twelve Papers, www.robert-w-jones.com.)
A robot's pain
I have placed small aluminum foil tabs on various Lego bricks in my Lego NXT robot. When the bricks are properly seated the tabs make good contact. When two bricks pull sufficiently far apart the tabs lose electrical contact and this signals a pain to the Asa H brain. This sort of pain sensor allows for the robot's action or an external agent to reseat the bricks and cure the pain. Asa can learn which pains are correlated with any given robot malfunctioning.
Monday, March 28, 2016
Limited vocabulary for Asa H
If there can be a limited (minimal) vocabulary and a small number of primitive terms and concepts (in the sense of Ramsey, Carnap, Lewis, the Canberra plan, and Carnap's aufbau project) then I would want to be sure to ground each of those concepts using the methods I described in my blogs of 1 Oct. 2015 and 5 Nov. 2015. (The set of primitive concepts may not be unique, of course.) Not all of the concepts need be defined on the same level in the case hierarchy. They may reside in different languages.
Thursday, March 24, 2016
Private language? More experimental philosophy
As my artificial intelligence Asa H learns spatial temporal patterns from the world it collates these observations into concepts of various degrees of abstraction. I.e., it learns a hierarchically organized vocabulary/language (or series of vocabularies/languages) with which it then describes/understands the world it lives (acts) in. If Wittgenstein, Dewey, and Quine are right no private language is possible and it should be possible for me to decode all of Asa's casebases and translate them into some human understandable natural language. (The translation process might be very difficult, however.) I have been successful at some of this as reported in my publications and in this blog over the years. But there have also been portions of Asa's casebase that I have not been able to translate, and then still other bits that I have found that I have gotten wrong.
It is also true that if one starts with two identical AIs and trains both on exactly the same input examples (but presented in different orders) one can develop different concepts (internal vocabularies) in the two resulting minds. Various machine learning algorithms do this.
Kelly has suggested conditions under which "minor differences in the order in which they receive the data may lead to different inductive conclusions in the short run. These distinct conclusions cause a divergence of meaning between the two scientists..." (The Logic of Reliable Inquiry, Oxford U. Press, 1996, Pg 381-382) And "two logically reliable scientists can stabilize in the limit to theories that appear to each scientist to contradict one another." (Pg. 383) "nothing in what follows presupposes meaning invariance or intertranslatability." (Pg. 384) Perhaps neither could then understand (or translate) the other's private language (concepts/vocabulary/ontology).
Clearly, this is also related to scientific pluralism, the idea of reconceptualizing reality, and the possibility of having alternate realities.
Wednesday, March 23, 2016
Experimental philosophy and AI research
Being an advocate of scientific pluralism I may explore or make use of some viewpoint without expecting it to be universal. We may come to understand one way of thinking (or one mechanism of thinking) without believing that we understand all of the intricacies ("mechanisms") of thought.
My creativity machine experiments (Trans. Kansas Acad. Sci., vol. 102, pg 32, 1999) rewrote natural language as PROLOG or other code ("logical language" if you will) and then applied logic programming or something similar to deduce conclusions or make postulates. This can be thought of as a computer implementation of logical positivism. Something similar happens when my AI Asa H starts with Lego NXT sensor input only and learns more and more complex (and abstract) spatial temporal patterns with which it comes to understand (be able to act successfully in) its world.
Experiments with Asa H can also be thought of as work in experimental philosophy looking into the details of functionalism.
Thursday, March 17, 2016
The many dimensions of vagueness in Asa H
My artificial intelligence Asa H incorporates vagueness in a number of ways. Clustering averages multiple spatial temporal observations to form a given concept. None of the individual observations are an exact match to the concept. Some similarity measure (or possibly several different similarity measures) compares an observed spatial temporal pattern with a known concept. Generalizations across the hierarchical memory organization are abstractions (vague). Time and spatial dilations constitute yet another source of vagueness. This all has implications for the philosophy of vagueness.
Friday, March 11, 2016
Problem lists
Laboratories, research groups, institutions should keep multiple "problem lists." One list may contain open questions in your field of research. In my case this might be questions like what is consciousness or how should we implement attention. A different list would contain items that we think should be improved upon. In my case, for example, I am doing certain things to control complexity but I may hope or believe that we might do better. Or I may be using some algorithm but believe that a better one may be possible.
In today's world most work is done by groups. (I am one of the few remaining lone wolves.) Some member of the group may have useful ideas about how to solve a problem faced by some other researcher. Several such outstanding problem lists should be maintained and reviewed periodically.
Thursday, March 10, 2016
Doing more with simulators
Robot simulators are faster and more economical than real physical robots. Any simulation can be thought of as 2 coupled Turing machines, one representing the robot (i.e., Asa H software plus any pre and post processors) and the other representing the environment. (See my blogs of 7 Jan 2015 and 23 June 2015) I have been recording the environment's response during my embodied robot concept learning experiments, i.e., the work described in the blogs of 1 Oct. 2015 and 5 Nov. 2015. I can now use these recordings as the case base for a case-based reasoner which serves as a virtual robot's environment.
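A minimal sketch of using such recordings as a virtual environment, here with a nearest-neighbour lookup over a hand-made case base (the recorded cases are illustrative assumptions):

```python
# Previously logged (robot action, environment response) pairs become the
# case base of a simple nearest-neighbour reasoner that stands in for the
# physical environment.
import numpy as np

# each case: (action vector sent to the robot, response vector sensed back)
case_base = [
    (np.array([1.0, 0.0]), np.array([0.2, 0.9])),
    (np.array([0.0, 1.0]), np.array([0.8, 0.1])),
    (np.array([1.0, 1.0]), np.array([0.5, 0.5])),
]

def virtual_environment(action):
    """Return the recorded response of the most similar logged action."""
    best = min(case_base, key=lambda c: np.linalg.norm(c[0] - action))
    return best[1]

print(virtual_environment(np.array([0.9, 0.1])))   # response of nearest case
```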
Sunday, March 6, 2016
R2-D2
Years (decades) ago I had a student who was inspired by the original Star Wars movie to get into S.T.E.M. She wanted to build R2-D2. (I have always preferred the robots from the film Silent Running and, before that, the robots from Asimov's book I, Robot and Jack Williamson's The Humanoids.) At that time we could not build R2-D2. Now, however, my LEGO NXT embodied Asa H directed robots can perform all of the functions/behaviors that R2 exhibited in the first film. (I have not seen some of the more recent Star Wars films.) Those functions are:
Mobile, wheeled but can step slightly (all terrain wheels and suspension)
Arms with grippers, manipulators, and/or fixturing
Accepts plug-in memory
Head rotates
Interfaces to computers
Communicates in a robot language
Reads and stores and replays data
Fire fighting
Speech recognition and voice control
A modified version of Blankenship's "Arlo" could do all this for a cost of perhaps $3000.
(But it was never clear to me that any of the Star Wars robots had sufficient capabilities so as to justify the cost of their construction and deployment.)
Elon Musk was similarly inspired to want to build the Millennium Falcon. Some fiction can serve as inspiration.
Thursday, March 3, 2016
Reconceptualizing reality
A number of the models that Asa H has created are non-Markovian. (Which is quite natural given the data structures Asa H uses. See my blog of 22 Nov. 2010) This is probably the most profound change that Asa has suggested so far. (See also my blog of 28 Feb. 2011)
Tuesday, March 1, 2016
Asa's subcategorization of its senses
For simplicity's sake I have frequently mounted some of Asa H's sensors (fixed) on the PC. Sound sensors and smell (smoke) sensors for example, and sometimes fixed webcams. Other sensors must be carried along on Asa's mobile LEGO robots. Examples would be pain and force sensors and accelerometers and gyros. I have frequently had a third group of sensors which the mobile robots must grasp and carry to the location where they will be used. These are things like electric and magnetic field probes, thermometers, GM counters, pH and salinity (taste) probes, etc. When Asa is embodied in this particular fashion it forms three categories of senses.
Humans do not categorize their senses in this way. (Although one may have to bring one's hand near a heat source in order to feel its warmth and will have to bring food or drink to one's mouth to taste it.) I can prevent Asa from making these distinctions by mounting all of the various sensors on the mobile robots. This is more cumbersome for the robotic elements, however.
Again, for some work, like natural language understanding, I would like Asa to understand human concepts as closely as possible. For other projects, like attempting to reconceptualize reality, I am happy for Asa to form its own unique set of categories.
Monday, February 29, 2016
Student study guides
There is a good blog post on study guides and student attitudes on the 25 Feb. 2016 angrybychoice.fieldofscience.com blog. I resist giving study guides. I want students to do more not less. I want students to learn more not less. I'm sure this hurts my student evaluations.
Tuesday, February 23, 2016
A concept of triggered recurrence
As we experience the world, we observe, record, and refine spatial-temporal patterns at various levels of abstraction ("concepts"). We then use these to describe/explain new observations/patterns. I am interested in finding new concepts with which to describe reality. (see my blogs of 21 Jan. 2016 and 10 Jan. 2016 for example) Fermi-Pasta-Ulam-Tsingou recurrence is related to an interesting concept Asa H has developed. The pattern of activity described as "a volunteer fire brigade" is triggered by and follows the occurrence of a fire. Asa H has developed the concept of a spatial-temporal pattern that recurs but only when triggered by the proper circumstances (rather than at some regular time interval).
Some of my colleagues think that I have given up physics altogether. But I am trying to find better ways to describe reality. This is physics at its most basic level. Maybe there is not mass, not particles, not waves, not quantum fields but rather.......................
Friday, February 19, 2016
word2vec
Google is making a big deal of being able to take vector Paris, subtract from it vector France, and then add on vector Italy and get as an answer vector Rome. (for example, AI Weekly, 11 Feb. 2016)
Chapter 5 of my book Twelve Papers (www.robert-w-jones.com, book) pg 56-57 does the same thing with vector kitten - vector cat + vector dog = vector puppy. My example was (intentionally) a low dimensional toy example but worked in the same way.
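A toy sketch of this vector arithmetic using hand-made concept vectors (the four feature dimensions are invented for illustration):

```python
# kitten - cat + dog should land nearest to puppy under cosine similarity.
import numpy as np

# toy feature dimensions: (feline, canine, young, adult)
vocab = {
    "cat":    np.array([1.0, 0.0, 0.0, 1.0]),
    "kitten": np.array([1.0, 0.0, 1.0, 0.0]),
    "dog":    np.array([0.0, 1.0, 0.0, 1.0]),
    "puppy":  np.array([0.0, 1.0, 1.0, 0.0]),
}

query = vocab["kitten"] - vocab["cat"] + vocab["dog"]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

best = max(vocab, key=lambda w: cosine(vocab[w], query))
print(best)   # -> "puppy"
```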
Thursday, February 18, 2016
B.E.A.M. Robotics
In my small robotics lab I design and build robots that define and ground concepts for my artificial intelligence Asa H. (See my blog of 1 Oct. 2015) I also have a half dozen small BEAM robots. Is there any way they can be useful to Asa? Asa could operate on top of a set of BEAM reflexes, or BEAM elements might be used in place of other pre or post processors. A light seeker could be used with my solar power panels. A beacon seeker could direct a robot to a recharging station/"food". Cliff avoidance would be a useful reflex/fear. A righting reflex might be useful for some robots. A thermophobic element might keep a robot from overheating. I could use a BEAM element for wall following.......
Humanoid robotics
I want my artificial intelligence Asa H to understand enough of the concepts (words) that humans use so that we can communicate with one another and understand one another. This won't be a perfect correspondence but then it isn't between individual humans either. So far I have tried to give Asa any of these needed concepts in the simplest (and cheapest) way that I can. My blogs of 1 Oct. 2015 and 5 Nov. 2015 explain how I've done this with several mobile robots and other computer interfacing methods.
Some philosophers believe that the details of our body configuration are also important if an artificial intelligence is to adequately share and understand the concepts humans use to describe our experience. (i.e., the idea that we might never understand what it is to be a bat, for example, just because its body configuration and sensors are too different from our own.) I have not tried to build a humanoid body for Asa H. As expensive as robotics is humanoid robotics is typically even more expensive. Even Blankenship's minimalistic Arlo (Arlo: The robot you've always wanted, CreateSpace, 2015) would cost at least $2000.00 without sensors and without the computer(s) that would hold Asa itself.
At other times I want Asa to form concepts that humans don't yet have (for example my blog of 10 Jan. 2016). If these two goals are mutually exclusive I may simply have to follow two different training paths for two different versions of Asa H. (The old question again of what the learning curriculum should be for an artificial intelligence.)
Creativity
Concepts may be decomposed into a set of features. Features from various different concepts can then be mixed and matched to define new concepts. If the new concepts prove useful in describing the world they are retained in the pool of known patterns.
It's quite common to see a new concept formed from an old one by the simple addition of a single new feature:
horse concept + horn feature = unicorn concept
horse concept + wing feature = Pegasus concept
man concept + wing feature = angel concept
I see Asa H doing this as well.
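A minimal sketch of that mixing and matching, treating a concept as just a set of features (my own toy illustration, not Asa H's internal representation):

# Toy sketch of composing new concepts from features (not Asa H's actual
# representation). A concept is a set of features; a new concept is an old
# concept plus one new feature.

horse = {"four legs", "tail", "mane", "hooves"}
man   = {"two legs", "two arms", "speech"}

unicorn = horse | {"horn"}     # horse concept + horn feature
pegasus = horse | {"wings"}    # horse concept + wing feature
angel   = man   | {"wings"}    # man concept + wing feature

# A new concept would be retained only if it proves useful in describing
# the world, e.g., if its features actually match some observation.
observed = {"four legs", "tail", "mane", "hooves", "wings"}
for name, concept in [("unicorn", unicorn), ("pegasus", pegasus), ("angel", angel)]:
    print(name, "matches the observation:", concept <= observed)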
Wednesday, February 17, 2016
Causal reasoning in Asa H
Asa H is designed to learn and hierarchically decompose the spatiotemporal patterns that it experiences (R. Jones, Trans. Kansas Acad. Sci., 109, 3/4, pg 159, 2006).
On one level in the Asa H hierarchy (or, more realistically, over a small range of levels) a pattern may be formed involving the concept of force and the concept of mass, connected to and leading to the concept of acceleration.
On a higher level in the Asa H hierarchy a pattern may be formed involving the concept of thought and the concept of knowledge, connected to and leading to a concept of creativity.
Patterns will also form between/across levels in the hierarchy. The concept of knowledge, at a higher level, will be linked with the concept of memory (on a lower level). Various concepts on lower levels, like the concept of a sentence or an utterance or message, will activate the concept of thought on a higher level.
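A minimal sketch of such within-level and cross-level links, assuming a simple dictionary of patterns (hypothetical structure and names, not Asa H's internals):

# Sketch of within-level and cross-level pattern links (hypothetical
# structure, not Asa H's internals). Concepts are tagged with their level.

within_level = {
    (1, ("force", "mass")):        (1, "acceleration"),   # level-1 pattern
    (3, ("thought", "knowledge")): (3, "creativity"),     # level-3 pattern
}

across_levels = {
    (1, "memory"):    (3, "knowledge"),   # lower-level memory supports knowledge
    (1, "utterance"): (3, "thought"),     # an utterance activates "thought" above
}

def complete(level, active):
    # If all of a pattern's antecedent concepts are active at this level,
    # its consequent concept becomes active too.
    return [consequent for (lvl, antecedents), consequent in within_level.items()
            if lvl == level and set(antecedents) <= set(active)]

def lift(level, concept):
    # A lower-level concept can activate a linked concept on a higher level.
    return across_levels.get((level, concept))

print(complete(1, ["force", "mass"]))   # [(1, 'acceleration')]
print(lift(1, "utterance"))             # (3, 'thought')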
Monday, February 15, 2016
EV3
I decided to buy a LEGO EV3 brick. They are faster than the NXT brick, have more memory, will accept a microSD card, and run Debian Linux. The EV3 also has 4 output ports while the NXT has only 3. But, of course, we have long been using multiplexers to expand both the number of input and output ports. EV3s are expensive enough that I do not plan to retire my NXTs. (But I will need to redesign at least some of my various robots.)
Saturday, February 13, 2016
Point charge student lab experiment
Forty years ago, if you wanted to measure equipotentials in the first-year physics lab, you filled a tray with saltwater, powered the electrodes with an induction coil, and located the equipotentials using the sound you heard in a hand-held earphone. I soon discovered that one could use a DC bench power supply in place of the AC induction coil and measure potentials directly with a digital voltmeter. You could even measure the electric field with a pair of probe wires attached to a voltmeter; rotating the pair until you got a maximum reading gave you the field direction. I used this setup for a few years until the modern conducting-paper experiment became available. It was less messy, with no dangerous saltwater around power supplies. But a point charge experiment in two dimensions does not give a potential that varies as 1/r or an electric field that varies as 1/r squared.
A 3-dimensional point charge experiment is possible using a large jar filled with saltwater. A metal screen is placed along the wall of the jar to serve as the cathode, connected to a battery or power supply. An insulated wire with just a tiny conducting tip exposed is hung in the center of the jar and connected to the DC supply as the anode (i.e., the point charge). Another similar insulated wire with an exposed tip is then connected to a voltmeter and used as a probe that you move around inside the water. I use stiff "bell wire." A pair can be used to measure the electric field. Now one can get the 3-dimensional 1/r scaling of voltage as a function of distance from the point charge.
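One quick check on the setup is to fit the exponent of the measured potential versus distance on a log-log scale; the three-dimensional point charge should give an exponent near -1 (the numbers below are made-up placeholders, not actual readings):

# Fit the scaling exponent of measured potential vs. probe distance.
# The data values here are made-up placeholders, not real lab readings.
import numpy as np

r = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # probe distance from the anode tip (cm)
V = np.array([4.1, 2.0, 1.4, 1.0, 0.8])   # measured potential (V)

# The slope of log V vs. log r is the scaling exponent; expect about -1
# in three dimensions (a 2-D conducting-paper setup is logarithmic instead).
exponent, _ = np.polyfit(np.log(r), np.log(V), 1)
print(f"fitted exponent: {exponent:.2f}")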
Friday, February 12, 2016
Flow-through systems
There are people who believe that some sufficiently large but closed formal system, beginning with a theory of everything (some string theory, perhaps), could, through computer simulations and emergence, come to develop reasonable theories of physics, then chemistry, then biology, then social science, etc., essentially forever. On the other hand, I have noted that my own creativity machines seem to operate more as information flow-through systems, requiring a steady input of new facts in order to generate new original output. (See Trans. Kansas Acad. Sci. 102, pg 32, 1999.)
One of the behaviorist's models of the brain is the finite state machine which, at each time step, receives a new input stimulus, changes its state, and outputs some new response, sometimes a creative one. (See, for instance, page 98 of Behaviorism by John Staddon, Duckworth, 1993.)
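A minimal sketch of that picture (a generic finite state machine, not Staddon's specific formulation): each step consumes a new stimulus, updates the state, and emits a response, and with no fresh input the output stream stops, which is the flow-through point.

# Generic finite-state-machine sketch of the behaviorist picture (not
# Staddon's specific model). Each step takes a stimulus, updates the
# internal state, and emits a response.

def step(state, stimulus):
    new_state = state + [stimulus]   # the state here is just the input history
    response = f"idea combining '{stimulus}' with {len(state)} earlier inputs"
    return new_state, response

state = []
for stimulus in ["journal article", "experimental result", "book chapter"]:
    state, response = step(state, stimulus)
    print(response)
# With no further input the machine produces no further output:
# an open, flow-through system rather than a closed generator of novelty.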
My own scientific work seems to depend upon a steady input in the form of books, journal articles, experimental results, etc. in order to continuously revise my theoretical models (state) and produce original results like those reported here and in my other publications. (i.e., an open system)
Wednesday, February 10, 2016
Life after death for quantum computers?
With an artificial intelligence program run on a quantum computer, the processing (thinking) takes place either in computers in Everett's many worlds or else in the many-dimensional Hilbert space. (See my blog of 13 Oct. 2015 and Bull. Am. Phys. Soc., March, 2016.) If the computer in our world were destroyed while the program was running, wouldn't the processing continue in other worlds? (But not forever.)
Tuesday, February 9, 2016
Psychometric AI
Ben Goertzel and others have criticized much of current mainstream work as being "narrow AI" (Artificial General Intelligence, Springer, 2007). Bringsjord and Schimanski have suggested we overcome this limitation using "one program for many tasks" as proposed in Newell's "20 Questions" paper (in Visual Information Processing, Academic Press, 1973). Whereas "everyone is carrying out work on his or her own specific little part of human cognition," Newell proposed we "stay with the diverse collection of small experimental tasks, as now, but to construct a single system to perform them all"; "it must be a single system in order to provide the integration we seek." This has been done to some degree by the connectionists; consider the wide range of problems that have been attempted using the standard backprop algorithm, for example. I have been doing similar things with my Asa H.
Monday, February 8, 2016
Who won the space race?
There's a new BBC documentary titled "Cosmonauts: How Russia Won the Space Race." I thought I'd present my own view/argument on this issue. Here are some important accomplishments:
First rocket into space: German V-2
Russian firsts:
First I.C.B.M., R-7
First earth satellite
First moon probes
First planetary probes
First man in space, Vostok
First soft landing on moon
First animals around moon and back safely
Robotic exploration of Venus
First space station
First permanently manned space station
U.S. firsts:
First satellite recovered from orbit, Discoverer
First orbital docking
First men to orbit moon
First men to land on moon
Robotic exploration of Mars
Robots probe outer planets
Robots out of solar system
Sample return from comet
Reusable spacecraft
It looks like a tie to me! Both sides have much to be proud of. There is no need to belittle one another.