In blogs like those of 21 Sept. 2010 and 26 Oct. 2016 I have discussed value/moral systems for A.s.a. and other artificial intelligences. Jonathan Haidt's moral foundations theory (see his book The Righteous Mind) suggests that humans share a set of primary values:
1. Care versus harm
2. Liberty versus oppression
3. Fairness versus cheating
4. Loyalty versus betrayal
5. Authority versus subversion
6. Sanctity versus degradation
Haidt believes political liberals weight the first three values more heavily while political conservatives weight the last three more, which helps explain how some people can vote for someone like Trump: they experience somewhat different realities. (See my blog of 21 July 2016.) This set of values overlaps only slightly with those found in my 21 Sept. 2010 blog. Perhaps I should try to implement more of them? Or should we be less concerned with what humans value (sticking to scientism)?
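One minimal way such foundations might be implemented in an agent is as a weighted value vector: each candidate action gets a score on every foundation, and a weight profile determines how much each foundation matters. The sketch below is purely illustrative; the foundation names follow Haidt's list above, but the `liberal` and `conservative` weight profiles and the sample action scores are invented numbers, not anything from Haidt or from the A.s.a. architecture.

```python
# Hypothetical sketch: scoring a candidate action against Haidt's six
# moral foundations. Weights and action scores below are illustrative
# assumptions, not measured values.

FOUNDATIONS = ["care", "liberty", "fairness", "loyalty", "authority", "sanctity"]

def moral_score(action_scores, weights):
    """Weighted sum of per-foundation scores.

    Each score lies in [-1, 1]; negative values mean harm, oppression,
    cheating, betrayal, subversion, or degradation on that foundation.
    Foundations the action does not touch default to 0.
    """
    return sum(weights[f] * action_scores.get(f, 0.0) for f in FOUNDATIONS)

# An invented 'liberal' profile weighting the first three foundations
# heavily, versus an invented 'conservative' profile weighting all six
# more evenly.
liberal = {"care": 1.0, "liberty": 1.0, "fairness": 1.0,
           "loyalty": 0.2, "authority": 0.2, "sanctity": 0.2}
conservative = {f: 0.7 for f in FOUNDATIONS}

# One made-up candidate action: caring but slightly unfair, strongly
# loyal and deferential to authority.
action = {"care": 0.5, "fairness": -0.2, "loyalty": 0.8, "authority": 0.6}

print(round(moral_score(action, liberal), 2))
print(round(moral_score(action, conservative), 2))
```

The two profiles rank the same action differently, which is one way an agent could model Haidt's point that people with different foundation weights "experience somewhat different realities."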