I used “invariant” here to mean “moral claim that will hold for all successor moralities”.
A vastly simplified example: at t=0, morality is completely undefined.
At t=1, people decide that death is bad, and lock this in indefinitely.
At t=2, people decide that pleasure is good, and lock that in indefinitely. Etc.
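To make the toy model concrete, here is a minimal sketch (all names and the `history` data are illustrative, not from the original): the set of locked-in invariants only grows over time, and everything outside it remains a free choice.

```python
# Toy model: moral invariants accumulate monotonically and are never removed.
locked_invariants = set()  # claims that hold for all successor moralities

history = [
    (0, None),                # t=0: morality completely undefined
    (1, "death is bad"),      # t=1: locked in indefinitely
    (2, "pleasure is good"),  # t=2: locked in indefinitely
]

for t, claim in history:
    if claim is not None:
        locked_invariants.add(claim)  # once locked, never revisited
    print(f"t={t}: invariants so far = {sorted(locked_invariants)}")
```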
An agent operating in a society that develops morality like that would, looking back, want all the accidents that led to current morality to be maintained, but, looking forward, may not particularly care how the remaining free choices come out. CEV in that kind of environment can work just fine, and someone implementing it in that situation would want to target it specifically at people from their own time period.