Friday, November 24, 2017

Words, meaning, truth

When my artificial intelligence A.s.a. H. senses it is near something, experiences a force on its exterior framework, and then senses acceleration, it forms a concept. If we simultaneously present the word “collision,” A.s.a. will attach that name to the concept. As time goes on this concept, and hence the meaning of the word “collision,” may change and evolve. A.s.a. might, for example, experience contact with a whisker or bumper switch instead of, or in addition to, feeling an external force. I believe human words and their meanings evolve in a similar way.
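
To make the idea concrete, here is a minimal sketch of this kind of concept formation and word binding. It is hypothetical, not A.s.a. H.'s actual code; the feature encoding, the update rate, and the nearest-concept naming rule are all my assumptions.

```python
import math

concepts = {}  # word -> feature vector, updated as a running average

def dist(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def observe(features, word=None, rate=0.2):
    """Record an episode; bind an offered word to it, or name it."""
    if word is None:
        # no word offered: name the episode with the closest known concept
        return min(concepts, key=lambda w: dist(concepts[w], features),
                   default=None)
    if word in concepts:
        # the concept (and so the word's meaning) drifts with new episodes
        concepts[word] = [c + rate * (f - c)
                          for c, f in zip(concepts[word], features)]
    else:
        concepts[word] = list(features)
    return word

# features: [near something, external force felt, deceleration sensed]
observe([1.0, 1.0, 1.0], word="collision")  # word presented with episode
observe([1.0, 0.2, 1.0], word="collision")  # bumper switch, little force felt
print(observe([0.9, 0.1, 0.9]))             # -> collision
```

Repeated episodes shift the stored vector, which is one simple way a word's meaning can evolve with experience.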

I have just read Michael Butler’s Deflationism and Semantic Theories of Truth (Pendlebury Press, 2017). Should we be interested in defining truth in formal (closed) systems or in natural languages (open systems)? How similar will the two concepts of truth be? How relevant will one be to the understanding of the other?

“Tarski himself describes the problem of defining truth for natural languages as meeting with ‘insuperable difficulties.’ Among these difficulties he cites that natural language is not bounded, in the sense that new words can always be added to it...” “...portions of a natural language, such as those dealing with an empirical science might be sufficiently formalized to allow the application of Tarskian methods.” (Butler, p. 75) But don’t scientific terms and words also change and evolve over time? Are space and time what Newton thought they were? Is E/c² the concept of mass that Newton had?

Natural languages have long resisted attempts to axiomatize them.

The concept of truth as consistency or coherence will change as one moves from one conceptualization of reality to another (as in scientific pluralism). If truth is defined by valid deduction from true assumptions, then what is true will vary as one’s set of assumptions varies. Truth based on the satisfaction of definitions changes as our definitions vary, and again, even scientific definitions vary over time and from one research program to another. Theories are always underdetermined by the evidence.
Pragmatic bases of truth depend upon what is useful to a particular culture in a particular environment.

Perhaps closed formal systems can never be in complete correspondence with open natural languages. Perhaps “laws” of nature, written in formal languages, can never be completely correct; i.e., we should think in terms of models rather than laws. (See, for example, Science Without Laws, Creager, Lunbeck, and Wise, eds., Duke Univ. Press, 2007.)

Tuesday, November 21, 2017

Advanced training for A.s.a. H.

Any artificially intelligent robot should be trained to feed itself, perhaps by finding a charging station and hooking up to it, or perhaps by finding a brightly lit space and charging its solar batteries. A robot should also be made to discover the things that cause it pain and damage it, things like high-speed collisions. Other common tasks include finding its way through a maze and stacking blocks. But what more complex environments, situations, and tasks should be attempted after these? Perhaps the 36 dramatic situations? (See, for example, The Thirty-six Dramatic Situations, Georges Polti, The Editor Co., 1917.)

A.s.a. H. has already experienced several of the dramatic situations. The search for a recharge, whether innate or learned, would be an example of the situation "Obtaining. Effort to obtain an object." Exploring far from the charging station might be an example of "Daring enterprise. Adventurous expedition." The breakage of a mechanical arm while lifting would be an example of the situation "Disaster. A natural catastrophe." A multiagent system involving competition might be an example of "Rivalry of kinsmen."

A multiagent system involving cooperation might entail "Self-sacrifice for kindred," or even "Life sacrificed to a cause" or "Deliverance. Rescue by friends."

Sunday, November 19, 2017

A.s.a.’s hierarchical memory

A.s.a.’s memory is a table of numbers. (See, for example, my blogs of 22 November 2010 and 10 February 2011.) The top three levels in the memory hierarchy were approximately:

.6 .6 .4 .4   .5 .5 .6 .4

.28 .28 .28 .28 .28 .28 .28 .28 .28 .28 .28 .28   .7 .7   .6 .6 .6   .7 .7  .6 .6 .6   1   .4 .4 .4 .4 .4  .5 .5 .5 .5   1

.7 .7   1   1   1   1   1   1   .7 .7   1   1   1   1   1   1

These rows are from one of A.s.a.’s knowledge bases. Translating them into something like English is not easy.
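
One plausible way to hold such a table in code is sketched below. The grouping of the numbers into sub-vectors simply follows the extra spacing in the rows above and is my reading of them, not A.s.a. H.'s actual file format; the dot-product matching rule is likewise only an assumption.

```python
# Hypothetical layout: each level of the hierarchy is a list of cases,
# and each case is a vector of weights like the numbers shown above.
levels = [
    # top level
    [[0.6, 0.6, 0.4, 0.4], [0.5, 0.5, 0.6, 0.4]],
    # second level
    [[0.28] * 12, [0.7, 0.7], [0.6, 0.6, 0.6], [0.7, 0.7],
     [0.6, 0.6, 0.6], [1.0], [0.4] * 5, [0.5] * 4, [1.0]],
    # third level
    [[0.7, 0.7], [1.0], [1.0], [1.0], [1.0], [1.0], [1.0],
     [0.7, 0.7], [1.0], [1.0], [1.0], [1.0], [1.0], [1.0]],
]

def best_match(level, vector):
    """Return the stored case on a level most similar to an input
    vector of the same length (simple dot-product similarity)."""
    candidates = [c for c in level if len(c) == len(vector)]
    return max(candidates,
               key=lambda c: sum(a * b for a, b in zip(c, vector)))

print(best_match(levels[0], [1.0, 1.0, 0.0, 0.0]))  # -> [0.6, 0.6, 0.4, 0.4]
```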

Saturday, November 18, 2017

The attention problem

Although A.s.a. H. has a large number of sensory channels compared to most other robotic systems, it still has very few inputs compared to humans or other animals. This has probably helped with the problem of attention. But how will this scale as A.s.a. tackles more complex environments and problems?

Tuesday, November 14, 2017

Trying to know what someone else is thinking

I have tried to translate one of A.s.a. H.'s smaller concept maps into English:

sense far=(US>200), sense near=(US<75),
move forward=(M1>0,M2>0), move backward=(M1<0,M2<0), turn right=(M1>0,M2<0), turn left=(M1<0,M2>0),
move=(turn left), move=(turn right), move=(move backward), move=(move forward),
walk=(walk forward), walk=(walk left), walk=(walk right), walk forward=(M5>0,M6>0), walk left=(M5=0, M6>0), walk right=(M5>0,M6=0), move=(walk),
approach=(sense far, move forward, sense near), retreat=(sense near, move backward, sense far),
decelerate=(acc<0), collision=(sense near, bump, decelerate),
close hand=(M3>0), open hand=(M3<0), grasp=(proximity sense, close hand, hand force), release=(proximity sense, hand force, open hand),
touch=(switch), force=(contact force), force=(hand force), force=(foot force1), force=(foot force2), force=(foot force3), force=(foot force4),
push=(move, touch, contact force), arm up=(M4>0), arm down=(M4<0), force=(sense weight), lift=(arm up, sense weight), lower=(sense weight, arm down), carry=(grasp, lift, move),
charge=(VB), damage=(collision, pain), health=(-damage, charge),
sense=(sense far), sense=(sense near), sense=(distance), sense=(acc), sense=(touch), sense=(wind), sense=(smell), sense=(taste), sense=(temperature), sense=(hear), sense=(see), sense=(pressure), sense=(proximity sense), sense=(force), sense=(pain), sense=(charge), sense=(position), sense=(path), sense=(dust), sense=(radiation), sense=(magnetic field), sense=(direction),
act=(grasp), act=(release), act=(move), act=(close hand), act=(open hand), act=(arm up), act=(arm down), act=(push), act=(lift), act=(lower), act=(carry),
think=(sort file), think=(load file), think=(save file), think=(search memory), think=(case deduction), think=(case extrapolation), think=(simulation),
self=(sense, think, act, health),
taste=(PH, salinity), distance=(US),
black=(CS~0), yellow=(CS~6), red=(CS~8.5), green=(CS~4), blue=(CS~2.5), white=(CS~17), color=(black), color=(yellow), color=(red), color=(green), color=(blue), color=(white), see=(color),
wind=(anemometer), temperature=(temp), hot=(temperature>80), cold=(temperature<50), hear=(sound),
smell=(MQ2, MQ3, MQ4, MQ5, MQ6, MQ7, MQ8, MQ9, MQ135, O2, CO2, humidity),
foot force1=(LFF), foot force2=(RFF), foot force3=(LRF), foot force4=(RRF), weight=(LFF, RFF, LRF, RRF),
path=(line detection), direction=(compass), north=(compass~0), east=(compass~45), south=(compass~90), west=(compass~135),
position=(lat, lon), pressure=(baro), dust=(ODS), magnetic field=(HS), radiation=(GMC)

I'm sure that this is imperfect and incomplete.
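
The "name=(part, part, ...)" notation above is regular enough to parse mechanically. Here is a small helper one might use for that; it is my own sketch, not part of A.s.a. H., and it reads repeated left-hand sides as OR and comma-separated parts as AND (see the post of 8 November below).

```python
import re
from collections import defaultdict

def parse_map(text):
    """Parse 'name=(part, part, ...)' entries into a dictionary mapping
    each concept name to a list of alternative definitions."""
    concept = defaultdict(list)
    for name, parts in re.findall(r'([^=(),]+)=\(([^()]*)\)', text):
        concept[name.strip()].append([p.strip() for p in parts.split(',')])
    return concept

cmap = parse_map('move=(turn left), move=(turn right), '
                 'approach=(sense far, move forward, sense near)')
print(cmap['move'])      # [['turn left'], ['turn right']]              (OR)
print(cmap['approach'])  # [['sense far', 'move forward', 'sense near']] (AND)
```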

Wednesday, November 8, 2017

Concept mapping the mind of A.s.a. H.

In trying to render the meaning of A.s.a.'s hierarchical memory into English, I have not always distinguished OR from AND. AND should perhaps be written as Out1=(In1,In2), while OR would be Out1=(In1), Out1=(In2).
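
As a toy illustration of that reading (my own sketch, not A.s.a.'s code): a concept with several right-hand sides fires if any one definition (OR) has all of its parts active (AND).

```python
definitions = {
    'Out1': [['In1', 'In2']],    # AND: Out1=(In1,In2)
    'Out2': [['In1'], ['In2']],  # OR:  Out2=(In1), Out2=(In2)
}

def fires(name, active):
    """True if any alternative definition has all parts active."""
    return any(all(part in active for part in alternative)
               for alternative in definitions[name])

print(fires('Out1', {'In1'}))  # False: AND needs both inputs
print(fires('Out2', {'In1'}))  # True:  OR needs only one
```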

After A.s.a. learned its first 100 concepts (most of the Toki Pona vocabulary), I tried to draw this up as a conventional concept map. It took me two 11" by 17" sheets of paper to complete not quite a third of it.

Wednesday, November 1, 2017

A.s.a. H.'s robots, November 2017

The robots that symbol ground A.s.a. H.'s concepts are modified and changed out frequently, depending upon the experiments being run. Currently we are using two nearly identical mobile manipulators and a walker, all on tethers. Each mobile manipulator is a Lego EV3 pbrick mounted on wheels, with an attached robot arm. The four EV3 outputs command the two drive wheels, a servo motor that raises and lowers the arm, and a motor that opens and closes the gripper/hand. Each mobile manipulator carries four sensors connected to the EV3 inputs. The walker's legs are mounted on an NXT pbrick, which commands the two drive motors and which also carries four sensors connected as its inputs. A set of Vernier sensors is interfaced via LabQuests and can be carried and separately positioned using the two mobile manipulator robots.
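
Reading this hardware description together with the concept map in the post of 14 November suggests the motor naming below. This is my inference from the M1-M6 identifiers in that map, not a documented assignment.

```python
# Hypothetical mapping of the M1..M6 motor identifiers used in the
# 14 November concept map onto the hardware described above.
MOTORS = {
    "M1": "mobile manipulator: drive wheel",
    "M2": "mobile manipulator: drive wheel",
    "M3": "mobile manipulator: gripper open/close",
    "M4": "mobile manipulator: arm raise/lower servo",
    "M5": "walker: leg drive motor",
    "M6": "walker: leg drive motor",
}
```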