Monday, April 30, 2018
Struggling to understand another mind
When I identify portions of the A.s.a. H. hierarchical concept web that do not neatly correspond to known and named human concepts, am I seeing A.s.a. conceptualizing reality differently from humans or am I seeing a set of subsymbolic concepts? I.e., concepts humans may have too but which we do not name?
Friday, April 27, 2018
Contests
As a member of Phi Kappa Phi I was asked to judge undergraduate and graduate research projects. These were a mix of Math, Physics, Chemistry, Earth science, Biology, Nursing, Forensic science, and more. I was asked to identify and put in rank order my top 3 papers in 2 categories, graduate and undergraduate. Aside from my dislike of the use of a scalar value measure, this is truly comparing apples with oranges. It set me to thinking about something better.
I think one might identify a few desirable characteristics and go hunting for them. (The vector components of a vector value measure.) Is the work original? Is it well supported by experiment? Is it useful? ... Then, if you find a really original paper, give an award for originality. If you don't find any really original work, don't give that award that year. If you find a paper that contains lots of good quality measurements, then give an award for that. ... The awards given out will likely vary from one year to the next. There'd be no best and second best.
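A minimal sketch of the idea, in Python. The papers, score vectors, and threshold here are made up for illustration:

# Vector-valued judging: one award per dimension, no overall "best."
DIMENSIONS = ["originality", "experimental support", "usefulness"]
AWARD_THRESHOLD = 0.8   # only award a dimension if some paper truly excels

papers = {
    "paper A": [0.9, 0.5, 0.6],
    "paper B": [0.4, 0.9, 0.7],
    "paper C": [0.6, 0.6, 0.6],
}

for i, dimension in enumerate(DIMENSIONS):
    best = max(papers, key=lambda name: papers[name][i])
    if papers[best][i] >= AWARD_THRESHOLD:
        print(f"award for {dimension}: {best}")
    else:
        print(f"no award for {dimension} this year")

Note that several papers can each win something, a dimension can go unawarded in a lean year, and no scalar "overall score" is ever computed.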
Thursday, April 26, 2018
Beyond algorithms
Wikipedia says “an algorithm is an unambiguous specification of how to solve a class of problems.” Asa H, as a non-algorithmic system, does not know how to solve the problem it faces (achieving high vector utility). It discovers specifications for success by acting in and observing the world around it. Such specifications may change over time as the world (and Asa) change. An "algorithm" begins its life with the knowledge it needs; a non-algorithmic system like Asa H begins without such knowledge but slowly discovers it. In the beginning Asa only needs to know enough to get started in its search.
So what is the minimum Asa (or some other AI) needs to begin with so that it can get started learning? That depends upon the world it finds itself in. The idea of a curriculum for Asa has been an attempt to present a sequence of more and more difficult tasks and environments which help Asa to grow. (Much in the way we structure grade school lessons for human children. Protecting them from “the real world” for a while.)
See also my blog of 23 March 2015.
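A minimal sketch of the curriculum idea, in Python. The task list, the agent interface (practice, attempt), and the competence test are all hypothetical, not Asa's actual code:

# Present tasks in order of difficulty; advance only once the agent is
# competent at the current one.
def competent(agent, task, trials=20, pass_rate=0.8):
    # hypothetical test: fraction of successful attempts
    successes = sum(agent.attempt(task) for _ in range(trials))
    return successes / trials >= pass_rate

def run_curriculum(agent, tasks):
    for task in tasks:                 # easiest first, hardest last
        while not competent(agent, task):
            agent.practice(task)       # learn by acting and observing
        # only now is the agent exposed to the next, harder task

curriculum = ["grasp object", "stack blocks", "navigate room",
              "cooperate with another agent"]
# run_curriculum(some_asa_agent, curriculum)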
Wednesday, April 25, 2018
Computing beyond algorithms
Different people define algorithms differently. Aho et al say "...an algorithm, which is a finite sequence of instructions, each of which has a clear meaning and can be performed with a finite amount of effort in a finite length of time." (Data Structures and Algorithms, Addison-Wesley, 1983, page 2) Markov talks about an algorithm L written in an alphabet A (A consisting of a finite number of letters). L is composed of a finite number of rules of the form P->Q where P and Q are words, i.e., finite strings of letters from A. (Theory of Algorithms, Academy of Sciences USSR, 1954)
Various people have argued that expert systems and neural networks are non-algorithmic. (Rule-Based Expert Systems, Buchanan and Shortliffe, Addison-Wesley, 1985, page 3) I have argued that A.s.a. H. is non-algorithmic. (Trans. Kansas Acad. Sci., vol. 108, No. 3/4, 2005, page 169) Yet Asa, expert systems, and neural networks are all written in conventional programming languages and run on standard computers, so in what sense can they be non-algorithmic? They are certainly built out of algorithms themselves.
An algorithm accepts a set of inputs and maps them to a set of outputs, "answers" or "solutions." If this map (or set of maps) is built in up front, already in place at run time, then your program is called "algorithmic." If your program observes the world and acquires the map(s) from the world then your program is called "non-algorithmic." Of course a "non-algorithmic" program must, itself, be able to map observations of the world into the algorithms/functions it learns/acquires. It maps ("metamaps") observations into maps. Furthermore, such non-algorithmic programs might completely change themselves, perhaps even change the hardware they're built on top of. (For example, when Asa H is copied from one computer to another, changes the set of robots it is operating, or uses new tools that it has been provided with.)
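A toy contrast, in Python. The hidden "world" function and the learner's interface are illustrative only; Asa's actual mechanism is far richer:

# An "algorithmic" solver carries its input->output map from birth.
def algorithmic_solver(x):
    return 2 * x + 1               # the map is fixed when the code is written

# A "non-algorithmic" solver starts empty and acquires its map by
# observing the world. Observing is the metamap: observations -> map.
class NonAlgorithmicSolver:
    def __init__(self):
        self.map = {}              # learned map, initially empty

    def observe(self, x, y):
        self.map[x] = y            # acquire a piece of the map

    def solve(self, x):
        return self.map.get(x)     # knows only what it has seen so far

world = lambda x: 2 * x + 1        # hidden from the learner's author
learner = NonAlgorithmicSolver()
for x in range(5):
    learner.observe(x, world(x))
print(learner.solve(3))            # 7, acquired rather than built in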
Monday, April 23, 2018
Multiple realities experienced by A.s.a. H.
It is possible to give A.s.a. H. various different sorts of memory, various different similarity measures, different value measures, different learning algorithms, etc. Different "cognitive styles" if you like. (See, for example, my blogs of 5 Sept. 2011, 10 July 2014, 19 Dec. 2014, 7 Jan. 2015, and 13 April 2016.) Similarly, Alfred Schutz believed that humans make use of multiple models of reality, building upon Goethe's "little worlds" or "pedagogical provinces," William James' "sub-universes," and Kierkegaard's "leaping between worlds." (See Schutz's On Multiple Realities, Philosophy and Phenomenological Research, Vol. 5, No. 4, June 1945, page 533.) Arguments for scientific pluralism again.
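A sketch of what interchangeable "cognitive styles" might look like, in Python. The two similarity measures and the Agent interface are illustrative, not Asa's actual implementation:

import math

def dot_similarity(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine_similarity(a, b):
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot_similarity(a, b) / (na * nb) if na and nb else 0.0

class Agent:
    # a "cognitive style" = a choice of similarity and value measures
    def __init__(self, similarity, utility):
        self.similarity = similarity    # how alike two experiences are
        self.utility = utility          # how good an outcome vector is

# two agents, same inputs, different experienced realities:
pessimist = Agent(cosine_similarity, utility=min)  # judge by worst component
optimist = Agent(dot_similarity, utility=max)      # judge by best component
print(pessimist.similarity([1, 0], [1, 1]))        # ~0.707
print(optimist.similarity([1, 0], [1, 1]))         # 1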
Friday, April 20, 2018
Vision
In developing A.s.a. H. I have not spent much time on vision capability, mostly because so many other groups have worked on that topic. I decided to buy a Google AIY vision kit. Asa may be able to use it as a vision preprocessor.
Thursday, April 19, 2018
Lecture
Almost two weeks ago I attended a conference where Audra Keehn and Jason Emry from Washburn presented their Comparing Lecture Style to Active Learning Styles in College Settings, a meta-analysis of about 100 papers taken from the JSTOR, EBSCO, and ERIC databases. They report that "These results indicate that incorporating non-lecture teaching methods does not improve test scores."
Wednesday, April 18, 2018
Transcendence
In that limited portion of the multiverse accessible to us through our sense impressions we find various complex internal processes including life, intelligence, and consciousness. Quantum computing provides evidence that such patterning is present in the multiverse as a whole. Considering the vastness of the multiverse,* it then seems very likely to me that there exist intelligent agents much more capable than us humans.
I hope that this is not just wishful thinking brought on by age and the threat of crazy Donald.
* See, for example, Wallace, The Emergent Multiverse, OUP, 2012, page 317.
Sunday, April 15, 2018
AI Personhood
Saudi Arabia has granted a robot citizenship and Europe is considering personhood for AIs. In order to define personhood don’t we have to first define intelligence, consciousness, life, and sentience? I think it will be hard to get agreement on those definitions. (I think I am OK with Clark’s definition of sentience. See A Theory of Sentience, OUP, 2000) I wouldn’t want these quantities to be assessed using scalar measures. I also worry that our measures will end up excluding some humans. And would AIs be credited with free will?
Friday, April 13, 2018
AI and psychopathology
In clinical psychopathology a division of labor is often attributed to (human) multiples. As one way of helping deal with the curse of dimensionality I have created specialist Asa agents each of which may (alternately) occupy/control the same robot body. This specialization reduces the size of each case base and speeds up processing.
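A sketch of the arrangement, in Python. The specialist domains, the dispatch rule, and the agent interface are hypothetical stand-ins for Asa's actual case-based machinery:

# Specialist agents alternately occupy/control one robot body. Each
# searches only its own small case base, which is faster than searching
# one monolithic base.
class Specialist:
    def __init__(self, name):
        self.name = name
        self.case_base = []            # small, domain-specific case base

    def act(self, observation):
        # match the observation against this specialist's cases only
        print(f"{self.name} specialist controls the body: {observation}")

specialists = {
    "navigation": Specialist("navigation"),
    "manipulation": Specialist("manipulation"),
}

def classify_context(observation):
    # hypothetical dispatch rule deciding which specialist takes over
    return "manipulation" if "grasp" in observation else "navigation"

def control_body(observation):
    specialists[classify_context(observation)].act(observation)

control_body("grasp the red block")    # manipulation specialist takes over
control_body("cross the room")         # navigation specialist takes over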
Thursday, April 12, 2018
Magical thinking
Asa’s thoughts really can bring about (some) effects in the world. After deliberation Asa can command a servo to move, grasp, lift, etc. As an action sequence (a case in Asa’s case base) is learned, coincidental co-occurrences should average out (decay away to low values and be ignored). But if and when they do not, Asa can be guilty of magical thinking. Asa currently believes that orange plastic objects are gamma ray sources since the only gamma sources Asa has seen have been orange. Asa does not presently have any deep theory of radioactivity that might make it question this correlation. Neither has Asa seen a wide selection of gamma sources. Humans suffer from similar magical thinking.
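A sketch, in Python, of how coincidences are supposed to average out. The update rates and the example stream are made up; the point is that an association only fades if counterexamples actually arrive:

STRENGTHEN, DECAY = 0.1, 0.05
association = 0.0    # strength of the link "orange plastic" -> "gamma source"

def update(orange_seen, gamma_seen, a):
    if orange_seen and gamma_seen:
        return min(1.0, a + STRENGTHEN)    # co-occurrence strengthens
    if orange_seen or gamma_seen:
        return max(0.0, a - DECAY)         # one without the other weakens
    return a

# If every gamma source Asa ever sees happens to be orange, the link
# only grows -- magical thinking:
for _ in range(10):
    association = update(True, True, association)
print(association)   # high, with no causal connection at all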
Thursday, April 5, 2018
Some thoughts on AI immortality
When a copy of the Asa agent "Robby", described in my post of April 2, is loaded into a computer system with different specifications (faster, more memory, different sensor array, different effectors, etc.) it notices this change and slowly adapts to it** (by changing the concepts in its knowledge/memory web). The copy of "Robby" that remains behind in the old computer system only experiences any time loss required by the copy operation. This loss would become longer for more extensive Asa case bases. It is not that a single Robby consciousness* has been moved to a newer and better machine. Rather, the consciousness***, along with the rest of the software and case base, was duplicated. The old copy of Robby, including its consciousness, will still die, as can the new copy. Just like the amoeba. See also my blog of 15 Oct. 2010.
You could force there to be only one Robby consciousness by upgrading the old system “one transistor at a time”; the consciousness will then just slowly adapt (change) with each “transistor changeout.” After the upgrade Robby’s consciousness will not be the same as before. You’ve not simply moved it into a new computer. The consciousness will be changed a lot if the hardware is changed a lot. The upgrade is an experience and experiences change you. Even tugs that are confined to the periphery of a knowledge web can change the web to its core. Even in our brief human lives, at what point have we changed so much that we are no longer the same person?
Neither should we be equating “me-ness” with consciousness. The rest of the concept web and software is part of what makes “me” “me.” I believe that the unconscious parts of my mind do some of my best work. The hardware is also part of what makes “me” “me.”
To obtain AI immortality I suppose you could just replace “transistors” (and any other sufficiently small scale components) as they age and do no (or at least only very gradual) upgrading. This might buy immortality at the price of obsolescence, and you would still face the issue in my 15 Oct. 2010 blog. Forgetting is an important kind of learning. We shouldn't keep out-of-date ideas/patterns as the world changes.
Death just allows for larger scale more rapid change. Nature thought it was a good idea.
* Say the one in my blog of 21 July 2016.
** Experiments actually find that the system crashes if the changes are too extreme!
*** I'm going to use MY model of what consciousness is. See my blog of 19 Oct. 2016. Having developed a detailed theory of thought, mind, and consciousness (Asa H) makes this kind of philosophical work possible.
Grasping
Suction is sometimes used to grip objects. If Asa were given such a system it would acquire a low level grasping concept that humans don't share. Conversely, humans who lick (wet) their fingers in order to pick up crumbs will form a grasping concept that Asa doesn't. The OWI-536 robot kit* from OWI Inc. suggests another interesting grasping mode/concept that humans will not have:
The tank tread "fingers" could move in order to draw in an object or push it away.
* This robot uses snap together assembly of major modular components and so might make limited use of Asa's pain subsystem but here we're just using it for inspiration.
Wednesday, April 4, 2018
Exploring alternate realities
I believe that each of us experiences a somewhat different reality depending upon the concepts we know and believe in. (See my blog of 21 July 2016.) There are some concepts that a person may not have at all. Things like: entanglement, recurrence, value pluralism.... Other concepts you may have but not use/believe. Things like: spirits, multiverses, life after death.... Some time and effort should be spent identifying more of these crucial concepts, concepts that make one person’s reality significantly different from another person’s. Many of those that I have identified have been physics concepts. Could be a student research project. But would our colleagues accept it? The publication prospects for this kind of thing are very limited. Not really a good subject for a young researcher.
Tuesday, April 3, 2018
Value pluralism again
The various pieces of information (cases) that Asa H acquires/learns each have a vector utility associated with them. Gammack et al's The Book of Informatics (Cengage Learning, 2011, pages 14 and 15) suggests that information should be assessed for quality along at least 13 dimensions:
1. new or surprising?
2. reliability
3. accuracy
4. relevance
5. timeliness
6. usability
7. completeness
8. simplicity
9. economical to produce
10. flexibility
11. verifiability
12. accessibility
13. secureness
Some of these dimensions cannot be assessed by Asa but we might be able to add others of them.
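A sketch of what attaching such a quality vector to each case might look like, in Python. The dimension names follow Gammack et al; the case and the scores are invented for illustration:

from dataclasses import dataclass, field

QUALITY_DIMENSIONS = ("novelty", "reliability", "accuracy", "relevance",
                      "timeliness", "usability", "completeness", "simplicity")

@dataclass
class Case:
    pattern: str
    utility: dict = field(default_factory=dict)   # dimension -> score

case = Case("orange plastic -> gamma source",
            utility={"novelty": 0.9, "reliability": 0.3, "accuracy": 0.5})

# No collapse to a scalar: the case is assessed dimension by dimension.
for dim in QUALITY_DIMENSIONS:
    if dim in case.utility:
        print(dim, case.utility[dim])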
Monday, April 2, 2018
Limits of human thought
To what extent are the "laws of physics" "in nature" and to what extent are they "in our heads"? In his doctrine of classical concepts Bohr believed that we had to describe reality in terms of classical concepts like space, and mass, and force. (See my blog of 27 April 2017.) Since we only have access to our sense impressions does this limit the concepts that we can create and use to build our models ("laws")?
Asa H can have access to sense impressions we humans do not have, things like direct observation of electric and magnetic fields. With tools like field meters we give ourselves additional artificial senses. Both Asa and humans can also extrapolate, interpolate, abstract, etc. and so form concepts that have no direct counterpart in the world, things like unicorns and mathematical systems that do not correspond to any observed pattern present in the world. But do such mechanisms (interpolation, extrapolation, etc.) give us (or Asa) a complete or adequate set of fundamental concepts from which to build models that have any hope of describing ultimate reality, Kant's "thing in itself"?
As with conventional neural networks it is difficult to translate the concepts (patterns) that Asa learns (creates) into human English phrases. (See my blog of 19 Nov 2017.) Perhaps Asa has learned some important concepts I am unaware of.
Nagarjuna argues that "To express anything in language is to express truth that depends on language and so this cannot be an expression of the way things are ultimately." (Beyond the Limits of Thought, Priest, OUP, 2002, page 260) But some languages express some ideas better than others do. (To express ideas in physics I prefer mathematics over English.) I am simply looking for BETTER languages* with which to describe reality. BETTER ontologies.*
* Plural because scientific pluralism may be needed. Multiple models not a single one.
Me-ness
When an Asa H agent has been trained, given a name, ("Robby"), and copied there are then two "Robbys." This is no different than having two amoebas where there used to be one. Asa is just smarter. As time progresses the two Robbys will differentiate themselves from one another and no longer be identical.