I think the standard LW argument for there being only one morality is based on the psychological unity of mankind.
I think you’re mixing up CEV with morality. CEV is an instance of the strategy “cooperate with humans” in some sort of AI-building prisoner’s dilemma. It gives the AI some preferences, and the only guarantee that those preferences will be good is that humans are similar.
There is “only one” “morality” (kinda) because when I say “this is right” I am executing a function, and functions are unique-ish. But Me.right can be different from You.right. You just happen to be wrong sometimes, because You.right isn’t right, because when I say right I mean Me.right.
So that “good” from the first paragraph would be Me.good, not CEV.good.
It is a factual statement that when I say something is “right,” I don’t mean CEV.right, I mean Me.right, and I’m not even particularly trying to approximate CEV.
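A toy sketch of that "right is a speaker-bound function" point, in the spirit of the Me.right / You.right notation above. The Agent class and the example judgments are invented purely for illustration:

```python
# Toy illustration of "right" as a speaker-bound function, not a shared one.
# The Agent class and its example judgments are invented for this sketch.

class Agent:
    def __init__(self, name, judgments):
        self.name = name
        self._judgments = judgments  # maps an action to this agent's verdict

    def right(self, action):
        """This agent's 'right': calling something right is just running this agent's own function."""
        return self._judgments.get(action, False)


me = Agent("Me", {"keep promises": True, "eat meat": True})
you = Agent("You", {"keep promises": True, "eat meat": False})

# Both of us are "executing a function" when we call something right,
# but they are different functions, so the outputs can differ.
action = "eat meat"
print(me.right(action))   # True  -- Me.right
print(you.right(action))  # False -- You.right

# From my standpoint, "right" simply denotes me.right, so when you.right
# disagrees with me.right, I call you wrong.
```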
You don’t think morality should just be CEV?
A “winning” CEV should result in people with wildly divergent moralities all being deliriously happy.
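A rough sketch of that success criterion, under the invented simplification that each person's morality is just a scoring function over outcomes: the winning output needn't be anyone's `right`, it only has to score high under everyone's.

```python
# Minimal sketch of the "winning CEV" criterion above, under the invented
# simplification that each person's morality is a score over outcomes.

def winning_outcome(outcomes, moralities, threshold=0.9):
    """Return an outcome every morality rates above `threshold`, if one exists.

    The outcome need not be anyone's idea of 'the' morality; it only has to
    leave people with divergent moralities all (deliriously) happy.
    """
    for outcome in outcomes:
        if all(score(outcome) >= threshold for score in moralities):
            return outcome
    return None  # no outcome satisfies everyone this strongly


# Two wildly divergent toy moralities (invented for illustration).
ascetic = lambda o: {"monastery": 1.0, "pleasure dome": 0.1, "garden city": 0.95}[o]
hedonist = lambda o: {"monastery": 0.2, "pleasure dome": 1.0, "garden city": 0.97}[o]

print(winning_outcome(["monastery", "pleasure dome", "garden city"],
                      [ascetic, hedonist]))  # -> "garden city"
```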