It’s not clear whether we have or need to have preferences over worlds with countably infinitely many symmetric individuals.
At a minimum, it’s worth noting that anthropic reasoning is also problematic in such worlds (I would guess equivalently so). A framework which answers questions of the form “what do you expect to happen?” could also probably be used to answer ethical questions.
(This kind of exercise seems potentially worthwhile anyway. I’m not so sure whether it is particularly relevant to AI alignment, for orthogonal reasons—hoping to figure out our values in advance seems to be giving up the game, as does making the kind of structural assumptions about value that could come back to bite us if we were wrong about infinite ethics.)