What? Where in the concept of CEV is that idea implied? The whole idea is something like: "humans seem to mean SOMETHING when they talk about this morality stuff. When we throw around words like 'should', that's basically (well, more or less) a reference to the underlying algorithm we use to reason about morality. So just extract that part, feed it more accurate information and more processing power, let it run (including modeling how it would update itself in light of new thoughts and so on), and go from there."
Where in that is there anything resembling the idea that any framework which could merely be asserted to be a moral framework actually is one?