The main problem with death is that valuable things get lost.
Once people are digital, this problem tends to go away, since you can relatively easily scan their minds and preserve anything of genuine value.
In summary, I don’t see why this issue would be much of a problem.
@Tim_Tyler:
I was going to say something similar, myself. All you have to do is constrain the FAI so that it’s free to create any person-level models it wants, as long as it also reserves enough computational resources to preserve a copy so that the model citizen can later be re-instantiated in their virtual world, without any subjective feeling of discontinuity.
However, that still doesn’t obviate the question. Since the FAI has limited resources, it still has to decide which things are worth reserving space to preserve, which means judging whether the model’s greater utility justifies the additional resources preservation requires. Then again, it could just accelerate the model so that the person lives out a full, normal life in their simulated universe, in which case they are irreversibly dead in their own world anyway.