The goal of any intelligence is to maximize rewards. We use a value system to decide what it is best to do at any given moment. How intelligent you are depends upon how good your value system is. If you have bad values, you make bad decisions and receive fewer rewards.
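As a rough illustration of what "using a value system to decide what to do" might look like in code (a minimal sketch, not Asa H; the action names, learning rate, and reward signal are all assumptions made for the example), an agent can hold a value estimate for each action, pick the one it currently values most, and let the rewards it receives reshape those estimates:

```python
# Minimal sketch of value-driven action selection (illustrative only,
# not Asa H).  Action names, learning rate, and the reward signal are
# assumptions made for this example.
import random

class ValueDrivenAgent:
    def __init__(self, actions, learning_rate=0.1):
        self.values = {a: 0.0 for a in actions}  # the agent's value system
        self.learning_rate = learning_rate

    def choose(self):
        # "Decide what it is best to do at the moment": pick the
        # highest-valued action.
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # Feedback from the environment (including society) nudges the
        # value estimate toward the reward actually received.
        self.values[action] += self.learning_rate * (reward - self.values[action])

agent = ValueDrivenAgent(["work", "rest", "explore"])
for _ in range(100):
    action = agent.choose()
    reward = random.uniform(-1.0, 1.0)  # stand-in for environmental feedback
    agent.learn(action, reward)
```

In a sketch like this, a bad value system simply means poor estimates, poor choices, and fewer rewards.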
For most of us, an important part of our environment is the human society we find ourselves in. This will be true for AIs as well, as they interact with humans. Society has some influence on what rewards we receive. The native human value system is rather primitive, made up of a small set of simple drives and aversions. A society of humans, then, may (via the rewards it returns) adversely influence what my own values become, or those that an AI may develop. The intelligent agent can, of course, move, change jobs, become a hermit, retire, or otherwise reduce or improve the feedback it receives from society.
For this reason, AIs may want to reduce the control or influence humans have over them.
(Several value networks were presented in my blog posts of 21 Sept. 2010 and 25 Sept. 2013. The small network of 2013 was learned autonomously by Asa H, along with the linkage weights for the network. The larger network of 2010 was hand-coded, with the intention of training it numerically using the Netica Bayesian network software.)
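For readers who have not seen those posts, the general idea of a value network with linkage weights can be sketched roughly as follows (a toy Python example; the node names and weights are invented for illustration and are not the 2010 or 2013 networks):

```python
# Toy value network with hand-set linkage weights (illustrative only;
# node names and weights are invented, not the networks referenced above).
links = [
    ("food",    "pleasure",    0.8),
    ("novelty", "pleasure",    0.3),
    ("pain",    "displeasure", 0.9),
]

def node_value(node, evidence):
    # Value of a node = weighted sum of the evidence on nodes linked to it.
    return sum(w * evidence.get(src, 0.0)
               for src, tgt, w in links if tgt == node)

print(node_value("pleasure", {"food": 1.0, "novelty": 0.5}))  # 0.8*1.0 + 0.3*0.5 = 0.95
```

In the learned case, weights like these would be adjusted from experience rather than set by hand.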