If you believe in moral progress (and CEV seems to rely on that position), then there’s every reason to think that a future society would want to change how we live now, if it had the capacity to make that kind of intervention.
In short, wouldn’t you change the past to prevent chattel slavery if you could? (If you don’t like that example, substitute preventing the October Revolution, or whatever example fits your preferences.)
CEV is more agnostic on that question. It works just as well for the ultimate conservative.
No, I wouldn’t torture innocent people to prevent chattel slavery.
Punishment from the future is spooky enough. Imagine what an anti-Guns of the South (a morally advanced future forcibly reforming its own past, rather than Turtledove’s time travelers arming the Confederacy) would be like for the temporal locals. Not pleasant, that’s for sure.
“CEV is more agnostic on that question. It works just as well for the ultimate conservative.”
Doesn’t CEV implicitly assert that there exists a set of moral assertions M that is more reliably moral than anything humans assert today, and that it’s possible for a sufficiently intelligent system to derive M?
That sure sounds like a belief in moral progress to me.
Granted, it doesn’t imply that humans left to their own devices will achieve moral progress. But the same is true of technological progress.
The implicit assertion is “greater than or equal”, not strictly “greater”. Run on a True Conservative, it will return the morals that the conservative currently has.
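To make that fixed-point claim concrete, here is a toy sketch in Python. Everything in it is hypothetical (the reflect operator, the value representation); it is not the actual CEV construction, just an illustration of an extrapolation operator that is weakly improving, returning a fixed point’s values unchanged:

```python
# Toy illustration only: "reflect" stands in for an idealized
# reflection step ("if we knew more, thought faster, ...").
# Nothing here is from the actual CEV proposal.

def extrapolate(values, reflect, max_steps=1000):
    """Iterate the reflection step until the values are reflectively stable."""
    for _ in range(max_steps):
        improved = reflect(values)
        if improved == values:  # fixed point: nothing left to revise
            return values
        values = improved
    return values

# A True Conservative's reflection endorses exactly what they already
# hold, so extrapolation returns their current morals unchanged:
# "greater than or equal", with equality at the fixed point.
def true_conservative_reflect(values):
    return values  # endorses the status quo

assert extrapolate({"tradition": 1.0}, true_conservative_reflect) == {"tradition": 1.0}
```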
Mm. I’ll certainly agree that anyone for whom that’s true deserves the title “True Conservative.”
I don’t think I’ve ever met anyone who meets that description, though I’ve met people who would probably describe themselves that way.
Presumably, though, someone who believes this is true of themselves would consider the whole notion of extrapolating the target definition for a superhumanly powerful optimization process to be silly, and would consider the label CEV technically accurate but misleading, in the same sense that I’m “extrapolating” the presence of my laptop right now: the word applies, but it implies falsehoods.