If I’m running a simulation of a bunch of happy humans, it’s entirely possible for me to completely avoid your penalty term just by turning the simulation off and on again every so often to reset all of the penalty terms.
No. First of all, if each new instance is considered a new person, then the result of turning off and back on would be negative because of the −u0 term. Assuming u0 ≥ h0τ0 (as I suggest in the text) means the loss from −u0 is always greater than the gain from avoiding the age-dependent penalty.
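To spell that out (on the reading above, where the gain from avoiding the age-dependent penalty over a whole off-and-on cycle is at most h0τ0, whatever h is), one such cycle changes the total utility by

$$\Delta U \;=\; -u_0 + \Delta P_{\text{avoided}} \;\le\; -u_0 + h_0\tau_0 \;\le\; 0 \quad \text{whenever } u_0 \ge h_0\tau_0,$$

so resetting is at best neutral and in general a net loss.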
Second, like I said in the text, I’m talking about an approximate model, not the One True Formula of morality. This model has limited scope, and so far I haven’t included any treatment of personal-identity shenanigans in it. However, now that you’ve got me thinking about it, one way to extend it that seems attractive is:
Consider the −u0 term as associated with the death of a person. There can be a partial death, which gives a partial penalty if the person is not entirely lost. If the person is of age τ at the time of death, and ey have a surviving clone that split off when the person was of age τ1, then it only counts as (τ−τ1)/τ of a death, so the penalty is only −((τ−τ1)/τ)·u0. If the person dies but is resurrected in the future, then we can think of the death as producing a −u0 penalty and the resurrection as producing a +u0 reward; this matters if we have time discounting and there is a large gap between the two. An imperfect resurrection produces only a partial resurrection reward. You cannot fully resurrect the same person twice, but a good resurrection following a bad one is awarded the difference. No sequence of resurrections can sum to more than one full +u0 reward, and a finite sequence will sum to strictly less than that unless at least one resurrection is perfect. Having amnesia can be counted either as dying with a living clone or as dying fully with a simultaneous partial resurrection, which amounts to the same thing.
Consider the age-dependent penalty as referring to the subjective age of a person. If you clone a person, each copy’s age counter continues from the same point. This is consistent with interpreting the penalty as a relation between “true happiness” and “quality of life”.
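For concreteness, here is a toy sketch of this accounting; the function names and the resurrection-quality scale q ∈ [0, 1] are purely illustrative, not part of the model in the text.

```python
# Hypothetical bookkeeping for the extension sketched above.

def clone_death_penalty(u0, tau, tau1):
    """Death at age tau with a surviving clone that split off at age tau1:
    only the fraction (tau - tau1) / tau of a death is counted."""
    return -u0 * (tau - tau1) / tau

def resurrection_rewards(u0, qualities):
    """Rewards for a sequence of resurrection attempts of given qualities in [0, 1].
    Each attempt is credited only with its improvement over the best attempt so far,
    so the rewards can never sum to more than one full +u0."""
    best = 0.0
    rewards = []
    for q in qualities:
        rewards.append(u0 * max(q - best, 0.0))
        best = max(best, q)
    return rewards

u0 = 5.0
print(clone_death_penalty(u0, tau=40.0, tau1=30.0))  # -1.25: only a quarter of a death
print(resurrection_rewards(u0, [0.4, 0.9, 1.0]))     # [2.0, 2.5, 0.5]: reaches +u0 only once an attempt is perfect
```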
I think that this extension avoids the repugnant conclusion just as well as the original does, but it would be nice to have a formal proof of this.
Ah, I see: I missed the −u0 term out in front; that makes more sense. In that case, my natural reaction would be that you’re penalizing simulation pausing, though if you use subjective age and gradually identify unique personhood, then I agree that you can get around that. Still, that seems to me like a bit of a hack. I feel like the underlying thing you really want there is variety of happy experience, so you should just reward variety of experience directly rather than trying to use some sort of continuous uniqueness measure.
I don’t understand why the underlying thing I want would be “variety of happy experience” (and only that). How does “variety of happy experience” imply that killing a person and replacing em with a different person is bad? How does it solve the repugnant conclusion? How does it explain the asymmetry between killing and not-creating? If your answer is “it shouldn’t explain these things because they are wrong” then, sorry, I don’t think that’s what I really want. The utility function is not up for grabs.
Say you are in a position to run lots of simulations of people, and you want to allocate resources so as to maximize the utility generated. Of course, you will design your simulations so that h ≫ h0. Because all the simulations are very happy, u0 is now presumably smaller than hτ0 (perhaps much smaller). Your simulations quickly overcome the −u0 penalty and start rapidly generating net utility, but the rate at which they generate it immediately begins to fade. Under your system it is optimal to terminate these happy people long before they reach the natural lifespan τ, and reallocate the resources to new happy simulations.
The counterintuitive result occurs because, under this system, most of the marginal utility accrues early in a person’s life.
No. It is sufficient that u0 ≥ h0τ0 (notice it is h0 there, not h) for killing and re-creating to be net bad.
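As a toy check of why the h0-vs-h distinction matters (a sketch only: the exact age-dependent penalty from the text isn’t quoted in this thread, so this assumes an illustrative penalty rate h0·(1 − e^(−t/τ0)) that saturates at h0; the only property the argument needs is that a reset can save at most h0τ0):

```python
import math

# Illustrative assumption: the penalty *rate* at subjective age t is h0*(1 - exp(-t/tau0)),
# so the most that terminating a person and starting a fresh one can ever save is h0*tau0.

h = 10.0          # happiness of the simulated people (h >> h0)
h0 = 1.0          # baseline-happiness parameter
tau0 = 5.0        # timescale of the age-dependent penalty
u0 = h0 * tau0    # the suggested condition u0 >= h0*tau0, taken with equality

def life_utility(T):
    """Utility of one simulated person living for subjective time T."""
    penalty = h0 * (T - tau0 * (1.0 - math.exp(-T / tau0)))  # integral of the penalty rate
    return -u0 + h * T - penalty

total_time = 100.0                # total person-time the available resources can buy
for n in (1, 2, 5, 10, 20):       # n = number of consecutive people sharing that time
    total = n * life_utility(total_time / n)
    print(f"{n:2d} people living {total_time / n:5.1f} each -> total utility {total:8.2f}")
```

Even with h = 10·h0, the printed totals never increase with the number of terminations; in this toy version, killing and re-creating only pays if u0 < h0τ0, independently of how large h is.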