By definition, you can only care about your own preferences. That being said, it’s certainly possible for you to have a preference for other people’s preferences to be satisfied, in which case you would be (indirectly) caring about the preferences of others.
The question of whether humans all value the same thing is a controversial one. Most Friendly AI theorists believe, however, that the answer is “yes”, at least if you extrapolate their preferences far enough. For more details, take a look at Coherent Extrapolated Volition.
Okay, that makes sense, but does this mean you can’t say someone else did something wrong, unless he was acting inconsistently with his personal preferences?
Ah, okay, I’ve been reading most hyperlinks here, but that one looks pretty long, so I will come back to it after I finish Rationality (or maybe my question will even be answered later on in the book...)
That is definitely not the idea behind CEV, though it may reflect the idea that a sizable majority will mostly share the same values under extrapolation.
Do they have any arguments for this besides wishful thinking?
I told him “they” assume no such thing—his own link is full of talk about how to deal with disagreements.
Yes, I’ve read most of the arguments; they strike me as highly speculative and hand-wavy.
This is an impressive failure to respond to what I said, which, again, was that you asked for an explanation of false data. “Most Friendly AI theorists” do not appear to think that extrapolation will bring all human values into agreement, so I don’t know what “arguments” you refer to, or even what you think they seek to establish. Certainly the link above has Eliezer assuming the opposite (at least for the purpose of safety-conscious engineering).
ETA: This is the link to the full sub-thread. Note my response to dxu.