Certainly some of philosophy is about exploring, defining, and redefining concepts. In my AI A.s.a. H. (and in humans?) concepts are defined on various levels of abstraction.* Some concepts are clearly limited to use on a single level; examples might be "color," "hear," "smell," and "taste." Some concepts appear to be applicable across all levels of abstraction; candidates might be "change," "different/opposite/NOT," "same/equal," "OR," and "AND." There also appear to be concepts that are applicable across a number of levels of abstraction, but not all: things like "causality," "good and bad," "thing," "location," "shape," "when," and "part."
Part of the problem of philosophy is being sure you are applying your concepts to the right levels of abstraction (avoiding category errors, for example). The right levels may differ from one person (or AI agent) to another, since no two intelligences share exactly the same concept (knowledge) webs.
A concept that strictly applies only on one (or a few) levels of abstraction might also serve as a metaphor on yet another (e.g., "time flies").
* Each new concept is discovered/learned/invented on some single particular level of abstraction in A.s.a. H.’s hierarchical semantic memory.
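The idea of concepts tagged with the abstraction levels on which they apply, and of a category error as applying a concept outside those levels, can be sketched in code. This is only an illustrative toy, not A.s.a. H.'s actual data structures; the eight-level hierarchy, the concept names, and the level assignments are assumptions made for the example.

```python
# Hypothetical sketch: concepts tagged with the abstraction levels on which
# they apply literally, plus a simple category-error check.
# (Not A.s.a. H.'s actual implementation; levels chosen for illustration.)

ALL_LEVELS = frozenset(range(8))  # assume an 8-level hierarchy

class Concept:
    def __init__(self, name, levels):
        self.name = name
        self.levels = frozenset(levels)  # levels where the concept applies

    def applies_at(self, level):
        return level in self.levels

# A concept tied to a single (sensory) level:
color = Concept("color", {0})
# A concept applicable on every level:
same = Concept("same/equal", ALL_LEVELS)
# A concept spanning several, but not all, levels:
causality = Concept("causality", range(1, 6))

def check_application(concept, level):
    """Flag a category error: using a concept outside its levels."""
    if not concept.applies_at(level):
        return f"category error: '{concept.name}' does not apply at level {level}"
    return f"'{concept.name}' applies at level {level}"

print(check_application(color, 4))  # a category error: color is level-0 only
print(check_application(same, 4))   # fine: same/equal applies on any level
```

Since different agents learn different concept webs, two agents could hold `Concept` objects with the same name but different level sets, which is one way to read the claim that the "right" levels differ from one intelligence to another.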