The problem with CEV can be phrased by extending the metaphor: in a CEV built from both Hitler and Gandhi, the areas in which their values differ place no constraint on the final output. So attitudes to Jews and violence, for instance, will be unpredictable in that CEV (so we should model them now as essentially random).
Stuart, I suspect you’re getting downvoted because you only repeated a point against which many arguments have already been given, instead of replying to those counter-arguments with something new.
It’s interesting. Normally my experience is that metaphorical posts get higher votes than technical ones—nor could I have predicted the votes from reading the comments. Ah well; at least it seems to have generated discussion.
That’s not how I understand CEV. But the theory is in its infancy and underspecified, so it currently admits of many variants.
Hum… If we got the combined CEV of two people, one of whom thought violence was ennobling and one of whom thought it was degrading, would you expect either or both of:
a) their combined CEV would be the same as if we had started with two people both indifferent to violence
b) their combined CEV would be biased in a particular direction that we can know ahead of time
The idea is that their extrapolated volitions would plausibly not contain such conflicts, though it’s not clear yet whether we can know what that would be ahead of time. Nor is it clear whether their combined CEV would be the same as the combined CEV of two people indifferent to violence.
So, to my ears, it sounds like we don’t have much of an idea at all where the CEV would end up—which means that it most likely ends up somewhere bad, since most random places are bad.
Well, if it captures the key parts of what you want, you can know it will turn out fine even if you’re extremely ignorant about what exactly the result will be.
Yes, as the Spartans answered Philip II of Macedon, Alexander the Great’s father, when he said, “You are advised to submit without further delay, for if I bring my army into your land, I will destroy your farms, slay your people, and raze your city”:
“If”.
Yup. So, perhaps, focus on that “if.”
Shouldn’t we be able to rule out at least some classes of scenarios? For instance, paperclip maximization seems like an unlikely CEV output.
Most likely we can rule out most scenarios that all humans agree are bad. So better than Clippy, probably.
But we really need a better model of what CEV does! Then we can start to talk sensibly about it.
“which means that it most likely ends up somewhere bad, since most random places are bad.”
I don’t think that follows, at all. CEV isn’t a random walk. It will at the very least end up at a subset of human values. Maybe you meant something different here by the word ‘bad’?