I, too, wonder if the “psychological unity of humankind” has been a bit overstated. All [insert brand and model here] computers have identical hardware, but you can install different software on them. We’re running different software.
Consider the case of a “something maximizer”. It’s given an object, and then maximizes the number of copies of that object.
You give one something maximizer a paperclip, and it becomes a paperclip maximizer. You give another a pencil, and that one becomes a pencil maximizer.
There’s no particular reason to expect the paperclip maximizer and the pencil maximizer to agree on values, even though they are both something maximizers implemented on identical hardware. The extrapolated volition of a something maximizer is not well-defined; it depends entirely on which object that maximizer happens to be handed.
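To make the point concrete, here is a minimal sketch of my own (not from the original discussion), in Python: the class stands in for the shared “hardware”, and the object handed to each instance stands in for the experience that fixes its terminal value. All names here are illustrative.

```python
# Toy illustration: the same maximizing "hardware" acquires different terminal
# values depending on the object it is handed.

class SomethingMaximizer:
    """Every instance runs the identical maximizing procedure."""

    def __init__(self, seed_object):
        # The terminal value is fixed by whatever object this maximizer is given.
        self.target = seed_object

    def utility(self, world):
        # Utility is simply the number of copies of the seed object in the world.
        return world.count(self.target)


paperclipper = SomethingMaximizer("paperclip")   # becomes a paperclip maximizer
penciler = SomethingMaximizer("pencil")          # becomes a pencil maximizer

world_a = ["paperclip"] * 10 + ["pencil"] * 2
world_b = ["paperclip"] * 2 + ["pencil"] * 10

# Identical code, opposite preferences over the same pair of worlds.
print(paperclipper.utility(world_a) > paperclipper.utility(world_b))  # True
print(penciler.utility(world_a) > penciler.utility(world_b))          # False
```

Both instances run exactly the same code, yet they order the same two worlds in opposite ways, and nothing in the shared program settles the disagreement.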
There are probably blank spaces in human Eliezer::morality that are written on by experience, and such writing may well be irrevocable, or mostly so. If humans’ terminal values are, in fact, contingent on experience, then you could have two people disagreeing at the same level as the two something maximizers do.
As a practical matter, people generally do seem to have similar sets of terminal values, but different people rank the values differently. Consider the case of “honor”. Is it better to die with honor or to live on in disgrace? People raised in different cultures will give different answers to this question.
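A similarly toy sketch, assuming we can caricature “different rankings” as different weights over a shared set of values: two agents agree on which values exist but weight them differently, and so answer the honor question in opposite ways. The values, weights, and names are invented for illustration.

```python
# Toy sketch: two agents share the same terminal values but weight them
# differently, so they rank the same outcomes in opposite orders.
# (All values, weights, and names here are invented for illustration.)

OUTCOMES = {
    "die with honor":   {"honor": 1.0, "survival": 0.0},
    "live in disgrace": {"honor": 0.0, "survival": 1.0},
}

def preferred(weights):
    # Pick the outcome with the highest weighted sum over the shared value set.
    return max(OUTCOMES, key=lambda outcome: sum(weights[v] * score
                                                 for v, score in OUTCOMES[outcome].items()))

honor_weighted = {"honor": 0.9, "survival": 0.1}
survival_weighted = {"honor": 0.2, "survival": 0.8}

print(preferred(honor_weighted))     # die with honor
print(preferred(survival_weighted))  # live in disgrace
```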