Um… I wouldn’t say CEV is based on a rejection of the idea of an objective moral standard, since Eliezer himself doesn’t exactly reject that:
Yes, I really truly do believe that humanity is better than the Pebblesorters! I am not being sarcastic, I really do believe that. I am not playing games by redefining “good” or “arbitrary”, I think I mean the same thing by those terms as everyone else. When you understand that I am genuinely sincere about that, you will understand my metaethics. I really don’t consider myself a moral relativist—not even in the slightest!
From The Bedrock of Morality: Arbitrary?
I do say CEV is based on a rejection of an objective moral standard, because the inference from CEV to moral relativism is clearer to me than those things Eliezer has said about not being a moral relativist, and because whether CEV implies moral relativism is a better-defined question than whether Eliezer is a moral relativist. (And whether someone is a moral relativist is determined by what they do, and by the actions they advocate, not by what they say when asked “Are you a moral relativist?”)
You can’t let what someone states directly trump the implications of the other things they say. Otherwise we could not refute theists when they state principles that imply some conclusion X, then deny that they imply X. (Example: “But if the reason to believe in God is that complex things must be designed by yet-more-complex things, then God must be even more complex.” “No, God is perfect simplicity.”)
Anyway, one thing I do know about Eliezer is that he doesn’t like it when I assert things about him. He may believe he is not a relativist, and act in accordance with that at times; and that may be more relevant to the outcome of Harry Potter: MoR than things I infer from CEV.
Except that, near as I can tell, CEV is NOT itself in any way based on relativism.
The idea basically amounts to “figure out what criteria people effectively mean by ‘morality’, or more generally, what it is they actually would have wanted if they knew more, had spent more time considering moral issues, etc...”
If you believed in objective morality, you would try to figure out what good morals are, rather than take the position (as in CEV) that every moral framework is equally valid from within itself; that you may therefore treat moral frameworks simply as goals, and all you can do is try to fulfil your goals; and that it makes no sense to wonder whether you should have different goals/values/morals.
Whut? Where in the concept of the CEV is that idea implied? The whole idea is something like “humans seem to mean SOMETHING when they talk about this morality stuff. When we throw around words like ‘should’, that’s basically (well, more or less) a reference to the underlying algorithm we use to reason about morality. So just extract that part, feed into it more accurate information and more processing power, let it run, including modeling how it would update itself in light of new thoughts/etc, and go from there.”
Where in that is there anything resembling the idea that any framework that could be asserted to be a moral framework actually is one?
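To be concrete about what I mean, here is a deliberately toy sketch of the loop I'm describing. Every name in it (MoralAlgorithm, deliberate, revise, the fixed-point test) is my own illustrative assumption; nothing here comes from the actual CEV document. It just caricatures "take the morality-reasoning algorithm, give it better information, and let it revise itself until it settles":

```python
# A toy caricature of the extrapolation loop described above.
# All names and structure are my own illustrative assumptions,
# not anything specified in the actual CEV writeup.
from dataclasses import dataclass


@dataclass(frozen=True)
class MoralAlgorithm:
    """Stand-in for 'the underlying algorithm we use to reason about morality'."""
    values: frozenset

    def deliberate(self, knowledge: frozenset) -> frozenset:
        # "if we knew more": confront current values with better information
        return self.values | knowledge

    def revise(self, conclusions: frozenset) -> "MoralAlgorithm":
        # "update itself in light of new thoughts": adopt the conclusions
        return MoralAlgorithm(values=conclusions)


def extrapolate_volition(algo: MoralAlgorithm, knowledge: frozenset,
                         budget: int = 100) -> MoralAlgorithm:
    """Run the extracted algorithm with more information and more thinking
    time until it stops changing its own mind."""
    for _ in range(budget):
        revised = algo.revise(algo.deliberate(knowledge))
        if revised == algo:  # reflective fixed point: no further self-revision
            return revised
        algo = revised
    return algo


# Toy usage: starting values plus "more accurate information" reach a fixed point.
start = MoralAlgorithm(values=frozenset({"don't harm people"}))
facts = frozenset({"these beings are also people"})
print(extrapolate_volition(start, facts))
```

The point of the caricature is only that the procedure extrapolates whatever morality-reasoning humans already have; nowhere does it grant equal validity to every framework someone might assert.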