B: … “I’ll make an FAI that cares about every human equally, no matter what they do.” … Would you help me build that?
A: Well, it fits with my intuitive notion of morality, but it’s not clear what incentive I have to help.
At this stage, I think the dialog goes astray by missing the real practical and political reason for CEV. The correct question is “Would you actively oppose me?” The correct answer is, “Well, I don’t see how I could reasonably expect anything much better than that, so …, no, I suppose I won’t actively oppose you.” And the difficult problem is convincing a rather large fraction of mankind to give the correct answer.
The correct question is “Would you actively oppose me?” The correct answer is, “Well, I don’t see how I could reasonably expect anything much better than that, so …, no, I suppose I won’t actively oppose you.”
The rich and powerful won’t care for CEV. It pays no attention to their wealth. They might as well have wasted their time accruing it.
Since the rich and powerful are high on the list for funding the R&D behind intelligent machines, they are likely to find a way to fund something that pays more attention to their preferences.
The “I don’t see how I could reasonably expect anything much better” seems likely to be a failure of the imagination.
The rich and powerful won’t care for CEV. It pays no attention to their wealth.
Not necessarily so. Quoting Eliezer: “A minor, muddled preference of 60% of humanity might be countered by a strong, unmuddled preference of 10% of humanity.” So any good Marxist will be able to imagine the rich and powerful getting their way in the computation of CEV just as they get their way today: by inducing muddle in the masses.
The “I don’t see how I could reasonably expect anything much better” seems likely to be a failure of the imagination.
And here I was considering it a victory of reason. :)
Quoting Eliezer: “A minor, muddled preference of 60% of humanity might be countered by a strong, unmuddled preference of 10% of humanity.” So any good Marxist will be able to imagine the rich and powerful getting their way in the computation of CEV just as they get their way today: by inducing muddle in the masses.
There’s little reason for them to bother with such nonsense—if they are building and paying for the thing in the first place.
CEV may be a utilitarian’s wet dream—but it will most likely look like a crapshoot to the millionaires who are actually likely to be building machine intelligence.
The “I don’t see how I could reasonably expect anything much better” seems likely to be a failure of the imagination.
And here I was considering it a victory of reason. :)
It seemed as though you were failing to foresee opposition to CEV-like schemes. There are implementation problems too—but even without those, such scenarios do not seem very likely to happen.
Thanks, I agree. It’s good to see that this multiplayer game notion of morality leads to a new insight that I didn’t build into it.