The reason I said 3 or 4 is that it’s not clear to me to what extent Eliezer thinks there are facts about how one ought to translate non-preferences into preferences (in a sense that is relevant to everyone, not just humans). I don’t know if he has taken any position on this question.
With the caveat that if you restrict the domain of “everyone” to humans, 2 would also be true.
Yes, assuming you mean to also restrict the domain of “most intelligent beings” to humans. However I think he would deny 2 as written.
You are of course correct about the intended domain-restriction.
I’d be surprised to hear an argument for how 4 was compatible with CEV or something like it, since lack of rigid general preference-creation would make convergence on a broad scale fairly implausible. And that conclusion does seem at odds with statements he’s made. But I do see your point.