Yeah, my example was rather weak. I think humor and empathy are important in current human minds, but uploads could modify their minds much more powerfully and accurately than we can today. Also, uploads would exist in a very different environment from ours. I don’t think current human minds or values would be well-adapted to that environment.
The more successful uploads would be those who modified themselves to make more copies or to consume/take over more resources. As they evolved, their values would drift, and they would care less about the things we care about. Eventually, they'd become unfriendly.
Why must value drift eventually produce unfriendly values? Do you just define “friendly” values as values close to ours?
Basically, yes. If values are different enough between two species/minds/groups/whatever, then each sees the other as resources that could be reorganized into more valuable structures.
To borrow a UFAI example: an upload might not hate you, but your atoms could be reorganized into computronium running thousands of upload copies/children.
“Friendly” values simply means our values (or values very close to them—closer than the spread of values among us). Preservation of preference means that the agency of the far future will prefer (and do) the kinds of things that we would currently prefer to be done in the far future (on reflection, if we knew more, given the specific situation in the future, etc.). In other words, value drift is the absence of reflective consistency, and Friendliness is reflective consistency in following our preference. Value drift results in far-future agency whose preference is very different from ours, and which therefore won't do the things we'd prefer to be done. This turns the far future into a moral wasteland which, from the point of view of our preference, is little different from what would remain after unleashing a paperclip maximizer or exterminating all life and mind.
(Standard disclaimer: values/preference have little to do with apparent wants or likes.)