Doesn’t CEV implicitly assert that there exists a set of moral assertions M that is more reliably moral than anything humans assert today, and that it’s possible for a sufficiently intelligent system to derive M?
The implicit assertion is “Greater or Equal”, not “Greater”.
Run on a True Conservative, it will return the morals that the conservative currently has.
Mm. I’ll certainly agree that anyone for whom that’s true deserves the title “True Conservative.”
I don’t think I’ve ever met anyone who meets that description, though I’ve met people who would probably describe themselves that way.
Presumably, though, someone who believes this is true of themselves would consider the whole notion of extrapolating the target definition for a superhumanly powerful optimization process to be silly, and would consider the label CEV technically accurate (in the same sense that I'm currently extrapolating the presence of my laptop) but misleading in what it implies.