Certainly some of philosophy is about exploring, defining, and redefining concepts. In my AI A.s.a. H. (and in humans?) concepts are defined on various levels of abstraction.* Some concepts are then clearly limited to use on a single level. Examples might be: "color", "hear", "smell", "taste". Some concepts appear to be applicable across all levels of abstraction. Candidates might be: "change", "different/opposite/NOT", "same/equal", "OR", "AND". There also appear to be concepts that are applicable across a number of levels of abstraction but not all; things like: "causality", "good and bad", "thing", "location", "shape", "when", "part".
Part of the problem of philosophy is being sure you are applying your concepts to the right levels of abstraction (avoiding category errors, for example). These levels may differ from one person (or AI agent) to another, since two intelligences do not share exactly the same concept (knowledge) webs. (A small sketch of level-tagged concepts follows the footnote below.)
A concept that strictly applies only on one (or a few) levels of abstraction might also serve as a metaphor on yet another. (e.g. "time flies")
* Each new concept is discovered/learned/invented on some single particular level of abstraction in A.s.a. H.’s hierarchical semantic memory.
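As a rough illustration of level-tagged concepts (this is not A.s.a. H.'s actual data structure; the concept names, the five-level hierarchy, and the numbers are all made up), one could record the levels on which each concept is defined and flag a category error when a concept is applied on a level where it is not defined:

# A minimal sketch: concepts tagged with the abstraction levels on which
# they are defined, plus a check that flags "category errors" when a
# concept is applied on the wrong level.

ALL_LEVELS = set(range(1, 6))          # hypothetical 5-level hierarchy

concepts = {
    "color": {1},                      # limited to one low level
    "taste": {1},
    "thing": {1, 2, 3},                # applicable across several levels
    "causality": {2, 3, 4},
    "NOT": ALL_LEVELS,                 # applicable on every level
    "same/equal": ALL_LEVELS,
}

def applicable(concept, level):
    """True if the concept is defined on the given abstraction level."""
    return level in concepts.get(concept, set())

def check_use(concept, level):
    if applicable(concept, level):
        print(f"'{concept}' is usable on level {level}")
    else:
        print(f"category error: '{concept}' is not defined on level {level}")

check_use("color", 1)   # fine
check_use("color", 4)   # category error
check_use("NOT", 4)     # fine on all levels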
Wednesday, May 30, 2018
Sunday, May 27, 2018
Attributing emotions to A.s.a. H.
William James suggested that a given emotion might be defined by human perceptions of our internal bodily state (things like heart rate, breathing rate, adrenaline level, body shaking, flushed face, etc.) plus contexts like pain, a sound, a light flash, or other environmental changes. If this is what emotion is, then Asa H could have somewhat similar emotions of its own.
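As a rough sketch of the James-style idea (the sensor names, feature values, and emotion prototypes below are made up, not Asa H's actual representation), an "emotion" could simply be whichever labeled prototype the combined internal-state-plus-context vector most resembles:

import math

def cosine(u, v):
    # normalized dot product of two equal-length feature vectors
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

# feature order: [heart_rate, breathing_rate, adrenaline, shaking, pain, loud_sound]
prototypes = {
    "fear": [0.9, 0.9, 0.9, 0.8, 0.2, 0.9],
    "calm": [0.2, 0.2, 0.1, 0.0, 0.0, 0.1],
}

def label_emotion(state):
    return max(prototypes, key=lambda name: cosine(state, prototypes[name]))

current = [0.8, 0.85, 0.7, 0.6, 0.1, 0.95]   # hypothetical readings
print(label_emotion(current))                 # -> "fear"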
Inconsistent cases
A.s.a. can learn inconsistent thoughts. A robot may have learned to move toward a light source in order to use solar panels to recharge/"feed" when it was hungry. The same robot might learn to move away from a light source if the light is caused by a fire. The two cases can be refined if smell and/or IR sensors can distinguish fire.
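A small sketch of how the two cases might be disambiguated once a fire-detecting feature (IR and/or smell) is added; the feature names and the matching rule are illustrative assumptions, not A.s.a.'s actual code:

cases = [
    {"if": {"light": 1, "hungry": 1, "fire": 0}, "then": "move toward light"},
    {"if": {"light": 1, "fire": 1},              "then": "move away from light"},
]

def match_score(condition, situation):
    # matched condition features count for the case, contradicted ones against it
    return sum(1 if situation.get(k) == v else -1 for k, v in condition.items())

def choose_action(situation):
    best = max(cases, key=lambda c: match_score(c["if"], situation))
    return best["then"]

print(choose_action({"light": 1, "hungry": 1, "fire": 0}))  # move toward light
print(choose_action({"light": 1, "hungry": 1, "fire": 1}))  # move away from light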
Tuesday, May 22, 2018
Interpolation and attention
An artificial general intelligence like Asa H would require a huge casebase in order to operate autonomously in the real world. A buffer might store a small fraction of those cases. Ideally the cases in this buffer, Ci, would all be as close as possible to the current input vector, V (as judged by the dot products of V with the Ci, for example). Any case Ci could easily be dropped out of the buffer if the latest input vector is now too different from it. It would be more costly, but a (parallelized) search through the full casebase could replace dropped cases with closer matching ones, at least periodically. One could interpolate to the current input vector, V, from the set of cases, Ci, currently in the buffer memory. This would produce a set of weights for the various cases Ci. These weights could then be applied to the predictions for the next time step in each Ci and a best single prediction calculated and output. Weighting the contribution of each case, Ci, by its utility measure would also be possible.
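A rough sketch of this buffer-plus-interpolation scheme follows. The tiny casebase, the buffer size and drop threshold, and the use of normalized dot products as the similarity measure are all illustrative assumptions, not A.s.a. H.'s actual code:

import math

def similarity(u, v):
    """Normalized dot product of two equal-length vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

# each case: current-step vector, next-step prediction, and a scalar utility
full_casebase = [
    {"vec": [1.0, 0.0, 0.2], "next": [1.0, 0.1, 0.3], "utility": 0.9},
    {"vec": [0.9, 0.1, 0.3], "next": [0.8, 0.2, 0.4], "utility": 0.7},
    {"vec": [0.0, 1.0, 0.9], "next": [0.1, 1.0, 0.8], "utility": 0.8},
]

BUFFER_SIZE = 2
DROP_THRESHOLD = 0.5            # drop buffered cases less similar than this

def refresh_buffer(buffer, V):
    # drop cases now too different from the current input vector V ...
    buffer = [c for c in buffer if similarity(c["vec"], V) >= DROP_THRESHOLD]
    # ... and (here, by brute-force search) refill from the full casebase
    candidates = sorted(full_casebase, key=lambda c: similarity(c["vec"], V),
                        reverse=True)
    for c in candidates:
        if len(buffer) >= BUFFER_SIZE:
            break
        if c not in buffer:
            buffer.append(c)
    return buffer

def predict(buffer, V, use_utility=True):
    # interpolate: weight each buffered case by its similarity to V
    # (optionally also by its utility), then blend the cases' predictions
    weights = []
    for c in buffer:
        w = max(similarity(c["vec"], V), 0.0)
        if use_utility:
            w *= c["utility"]
        weights.append(w)
    total = sum(weights) or 1.0
    return [sum(w * c["next"][i] for w, c in zip(weights, buffer)) / total
            for i in range(len(V))]

V = [0.95, 0.05, 0.25]
buf = refresh_buffer([], V)
print(predict(buf, V))          # blended next-step prediction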
Saturday, May 19, 2018
Minimum vocabulary
In his book Human Knowledge, Russell defined a minimum vocabulary acquired by observation/experience of each thing named. Asa H's lowest level concepts have been defined in just this way; see my blog of 1 Oct. 2015.
Thursday, May 17, 2018
Asa’s Intentions
t1 < t2 < t3. At time t1 a case C from A.s.a. H's casebase is strongly activated. C contains a planned action to occur at time t3. At time t2 A.s.a. then has the intention of taking that action: Asa is in a state of mind directed toward taking that action.
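A toy sketch of this timing (the names and times are hypothetical, not A.s.a. H.'s code): a case activated at t1 schedules an action for t3, and in between the agent holds the intention:

case_C = {"activated_at": 1, "planned_action": "recharge at station", "act_at": 3}

def has_intention(case, t):
    # between the case's activation (t1) and the planned action time (t3),
    # the agent holds the intention of taking the action
    return case["activated_at"] < t < case["act_at"]

for t in (1, 2, 3):
    print(f"t={t}: intention held = {has_intention(case_C, t)}")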
Innate ideas/concepts
Consider an embodied agent with an array of thermistors spread over its body. A localized external heat source might activate sensors A and B and later B and C, C and D, etc. From this experience the agent might acquire the notion that A is "near" B but "far" from C and that B is "near" C but "far" from D, etc. This might constitute a primitive model of space. With a moving or time-varying heat source, sequential activation of the sensors in the array might produce a primitive model of time. Sensors deep within the body would be less sensitive to external stimuli than sensors on the surface ("skin"), producing a notion of "inside" and "outside."
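A small sketch of how "near"/"far" relations might be read off from which thermistors are activated together; the sensor names and activation events are made up:

from itertools import combinations
from collections import Counter

# each event lists the sensors activated together by one localized heat source
events = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "B"), ("B", "C")]

co_activation = Counter()
for event in events:
    for s1, s2 in combinations(sorted(event), 2):
        co_activation[(s1, s2)] += 1

def relation(s1, s2):
    # sensors that have fired together are taken to be "near" one another
    pair = tuple(sorted((s1, s2)))
    return "near" if co_activation[pair] > 0 else "far"

print("A", relation("A", "B"), "B")   # near: activated together
print("A", relation("A", "C"), "C")   # far: never activated together
print("B", relation("B", "C"), "C")   # near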
Evolution of oppositional concepts
I have experimented with oppositional concepts in A.s.a. H. for some time.* One can, for example, evolve each vector concept, Ci, over time, compute -Ci, and maintain separate vector utilities for Ci and -Ci as they are used/become activated. -Ci then only evolves/changes as Ci changes. Alternatively, once defined, -Ci can be evolved independently of Ci, each with its own (evolving) vector utilities. But then Ci and -Ci may evolve "away from each other" and become less the opposites of one another. One could periodically reinforce opposition. How often? Under what conditions? Or we could maintain several different "opposites." Frequently A.s.a. has been allowed to completely delete concepts which are judged not to be useful. (A small numerical sketch of the two options appears below.)
* See my blog of 5 July 2016 and Trans. Kansas Acad. Sci., vol 109, #3/4, 2006.
Simpler "light" versions of A.s.a. H. may not include oppositional concepts.
Tuesday, May 15, 2018
Buridan's ass
Jean Buridan considered a starving ass placed between two haystacks that are equidistant from it. According to Buridan, unable to choose between them, the ass would starve to death. Quite early on in my experiments with Lego robots I had something rather similar to the Lego Scout Bug with two feelers in front. If the left feeler was triggered, the robot was to turn toward the right. If the right feeler was triggered, the robot was to turn toward the left. In operation I once had the robot hit a wall exactly head on, trigger both feelers at once, and sit permanently frozen in place. Needless to say, more sophisticated programming doesn't suffer from this problem.
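A tiny sketch of one way more sophisticated programming avoids the frozen state: handle the both-feelers case explicitly, for instance by backing up and turning in a randomly chosen direction. This is illustrative only, not the actual robot code:

import random

def steer(left_feeler, right_feeler):
    if left_feeler and right_feeler:      # head-on collision: break the tie
        return "back up, then turn " + random.choice(["left", "right"])
    if left_feeler:
        return "turn right"
    if right_feeler:
        return "turn left"
    return "go straight"

print(steer(True, True))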
Saturday, May 12, 2018
A science of values
Values begin with the advent of life: the valuing of offspring and of longevity. I have watched A.s.a. add intelligence to this list and the beginnings of an ethics. (See my blogs of 9 Jan 2015 and 20 March 2018.) Since they have different needs, different organisms will develop different values, and conflicts between species will result. On a smaller scale there is conflict between individual agents when their values differ.
Thursday, May 10, 2018
How should we train groups?
I have stressed the importance of the syllabus for (single) agent training. In what order should various things (knowledge, skills, values) be taught?
I have also stressed the importance of having a society of (specialist) agents. Some tasks can be accomplished by a group which cannot be accomplished by lone members of the group.
How should these be combined? Should we alternate individual training with periods of group training? What should the group syllabus look like? How critical is it to get this right?
Wednesday, May 9, 2018
Robot burns
I have occasionally operated Asa H robots outdoors. (Usually involving GPS use.) Having an array of thermistors providing a thermal pain component then makes sense. Any solar panels lose power output if they get too hot, and microprocessors carried on the robots should not be allowed to overheat. I'm not too sure what suitable temperature thresholds should be; it depends on the particular hardware, of course.
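A minimal sketch of a thermal pain signal computed from a thermistor array; the two threshold values are placeholders, since, as noted, suitable thresholds depend on the particular hardware:

WARN_C = 60.0     # hypothetical: panels start losing output
PAIN_C = 80.0     # hypothetical: risk of damage to electronics

def thermal_pain(readings_c):
    """Return a pain level in [0, 1] from a list of temperatures in Celsius."""
    hottest = max(readings_c)
    if hottest <= WARN_C:
        return 0.0
    return min((hottest - WARN_C) / (PAIN_C - WARN_C), 1.0)

print(thermal_pain([35.0, 41.0, 38.5]))   # 0.0: comfortable
print(thermal_pain([35.0, 72.0, 38.5]))   # partial pain signal
print(thermal_pain([35.0, 95.0, 38.5]))   # 1.0: maximum pain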
Thursday, May 3, 2018
PYTHON
I've seen a lot of AI code in PYTHON lately, so I wrote up and debugged a minimal case-based reasoner using PYTHON just to learn a bit more.
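For illustration, a minimal case-based reasoner might look something like the sketch below (this is not the code referred to above): it retrieves the stored case nearest to a query, reuses that case's solution, and can retain new cases:

import math

def distance(u, v):
    # Euclidean distance between two problem descriptions
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

casebase = [
    {"problem": [0.0, 0.0], "solution": "action A"},
    {"problem": [1.0, 0.0], "solution": "action B"},
    {"problem": [0.0, 1.0], "solution": "action C"},
]

def solve(query):
    # retrieve the nearest case and reuse its solution
    nearest = min(casebase, key=lambda c: distance(c["problem"], query))
    return nearest["solution"]

def retain(problem, solution):
    # learn: store a new case for future reuse
    casebase.append({"problem": problem, "solution": solution})

print(solve([0.9, 0.1]))      # -> "action B"
retain([0.5, 0.5], "action D")
print(solve([0.55, 0.45]))    # -> "action D"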