If I’m running a simulation of a bunch of happy humans, it’s entirely possible for me to completely avoid your penalty term just by turning the simulation off and on again every so often to reset all of the penalty terms. And if that doesn’t count because they’re the same exact human, I can just make tiny modifications to each person that negate whatever procedure you’re using to uniquely identify individual humans. That seems like a really weird thing to morally mandate that people do, so I’m inclined to reject this theory.
Furthermore, I think the above case generalizes to imply that killing someone and then creating an entirely different person with equal happiness is morally positive under this framework, which goes against a lot of the things you say in the post. Specifically:
It avoids the problem with both totalism and averagism that killing a person and creating a different person with equal happiness is morally neutral.
It seems to do so in the opposite direction from the one I think you want it to.
It captures the intuition many people have that the bar for when it’s good to create a person is higher than the bar for when it’s good not to kill one.
I think this is just wrong: as I said, it incentivizes killing people and replacing them with other people to reset their penalty terms.
I do agree that whatever measure of happiness you use should include the extent to which somebody is bored, or tired of life, or whatnot. That being said, I’m personally of the opinion that killing someone and creating a new person with equal happiness is morally neutral. I think one of the strongest arguments in favor of that position is that turning a simulation off and then on again is the only case I can think of where you can actually do that without any other consequences, and that seems quite morally neutral to me. Thus, personally, I continue to favor Solomonoff-measure-weighted total hedonic utilitarianism.
If I’m running a simulation of a bunch of happy humans, it’s entirely possible for me to completely avoid your penalty term just by turning the simulation off and on again every so often to reset all of the penalty terms.
No. First of all, if each new instance is considered a new person then the result of turning off and back on would be negative because of the −u0 term. Assuming u0≥h0τ0 (like I suggest in the text) means the loss from −u0 is always greater than the gain from avoiding the age-dependent penalty.
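To spell out the inequality behind this first point (a minimal sketch; the per-person decomposition below is an assumed reading, since the post’s exact formula isn’t quoted in this thread): write a person’s contribution as
$$U(\tau) = \int_0^{\tau} h(t)\,dt - P(\tau) - u_0,$$
where $P(\tau)$ is the nonnegative, nondecreasing accumulated age-dependent penalty, assumed bounded by $P(\tau) \le h_0 \tau_0$. Turning the simulation off at age $\tau$ and back on replaces one lifetime of length $\tau_2$ with two lifetimes of lengths $\tau$ and $\tau_2 - \tau$ carrying the same happiness stream, so the change in total utility is
$$\Delta U = P(\tau_2) - P(\tau) - P(\tau_2 - \tau) - u_0 \;\le\; P(\tau_2) - u_0 \;\le\; h_0\tau_0 - u_0 \;\le\; 0,$$
so the reset never comes out ahead once $u_0 \ge h_0\tau_0$.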
Second, like I said in the text, I’m talking about an approximate model, not the One True Formula of morality. This model has limited scope, and so far I haven’t included any treatment of personal identity shenanigans in it. However, now that you got me thinking about it, one way to extend it that seems attractive is:
Consider the −u0 term as associated with the death of a person. There can be partial death which gives a partial penalty if the person is not entirely lost. If the person is of age τ at the time of death, and ey have a surviving clone that split off when the person was of age τ1, then it only counts as (τ−τ1)/τ of a death, so the penalty is only −((τ−τ1)/τ)u0. If the person dies but is resurrected in the future, then we can think of death as producing a −u0 penalty and resurrection as producing a +u0 reward. This is important if we have time discount and there is a large time difference. Imperfect resurrection will produce only a partial resurrection reward. You cannot fully resurrect the same person twice, but a good resurrection following a bad resurrection awards you the difference. No sequence of resurrections can sum to more than 1, and a finite sequence will sum to strictly less than 1 unless at least one of them is perfect. Having amnesia can be counted as dying with a living clone or as dying fully with a simultaneous partial resurrection, which amounts to the same.
Consider the age-dependent penalty as referring to the subjective age of a person. If you clone a person, the age counter of each continues from the same point. This is consistent with interpreting it as a relation between “true happiness” and “quality of life”.
I think that this extension avoids the repugnant conclusion as well as the original, but it would be nice to have a formal proof of this.
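To make the partial-death and resurrection accounting above concrete, here is a toy bookkeeping sketch in Python. It encodes only the rules stated above (the fractional death penalty and resurrection rewards that can never sum to more than one full u0); the function and class names are mine and the numbers are arbitrary.

```python
# Toy bookkeeping for the proposed extension: fractional death penalties
# and resurrection rewards that can never sum to more than one full u0.
# Names and numbers here are illustrative, not from the original post.

U0 = 1.0  # the u0 death penalty, in arbitrary utility units


def death_penalty(age_at_death, clone_split_age=None):
    """Penalty for a death at subjective age `age_at_death`.

    If a surviving clone split off at `clone_split_age`, only the
    un-shared fraction (age - split_age) / age of the person is lost.
    """
    if clone_split_age is None:
        lost_fraction = 1.0
    else:
        lost_fraction = (age_at_death - clone_split_age) / age_at_death
    return -lost_fraction * U0


class ResurrectionLedger:
    """Tracks how much of one person has already been resurrected.

    A resurrection of fidelity f in [0, 1] is rewarded only for the
    improvement over the best previous resurrection, so the rewards
    over any sequence of resurrections never sum to more than +U0.
    """

    def __init__(self):
        self.best_fidelity = 0.0

    def resurrect(self, fidelity):
        reward = max(0.0, fidelity - self.best_fidelity) * U0
        self.best_fidelity = max(self.best_fidelity, fidelity)
        return reward


# Example: a full death, then a bad resurrection followed by a better one.
# A good resurrection after a bad one is rewarded only for the difference.
penalty = death_penalty(40.0)   # -1.0 (whole person lost)
ledger = ResurrectionLedger()
r1 = ledger.resurrect(0.5)      # +0.5
r2 = ledger.resurrect(0.9)      # +0.4 (only the improvement over 0.5)
print(penalty + r1 + r2)        # roughly -0.1: net-negative, since neither resurrection was perfect
```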
Ah, I see—I missed the −u0 term out in front; that makes more sense. In that case, my normal reaction would be that you’re penalizing simulation pausing, though if you use subjective age and gradually identify unique personhood, then I agree that you can get around that. Though that seems to me like a bit of a hack—I feel like the underlying thing that you really want there is variety of happy experience, so you should just be rewarding variety of experience directly rather than trying to use some sort of continuous uniqueness measure.
I don’t understand why the underlying thing I want is “variety of happy experience” (only)? How does “variety of happy experience” imply killing a person and replacing em by a different person is bad? How does it solve the repugnant conclusion? How does it explain the asymmetry between killing and not-creating? If your answer is “it shouldn’t explain these things because they are wrong” then, sorry, I don’t think that’s what I really want. The utility function is not up for grabs.
Say you are in a position to run lots of simulations of people, and you want to allocate resources so as to maximize the utility generated. Of course, you will design your simulations so that h >> h0. Because all the simulations are very happy, u0 is now presumably smaller than hτ0 (perhaps much smaller). Your simulations quickly overcome the u0 penalty and start rapidly generating net utility, but the rate at which they generate it immediately begins to fade. Under your system it is optimal to terminate these happy people long before they reach the natural lifespan τ, and reallocate the resources to new happy simulations.
The counterintuitive result occurs because this system assigns most of the marginal utility to occur early in a person’s life.
No. It is sufficient that u0≥h0τ0 (notice it is h0 there, not h) for killing + re-creating to be net bad.
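For what it’s worth, here is a quick numerical check of this point. The saturating curve P(τ) = h0τ0(1 − e^(−τ/τ0)) is an assumed shape chosen only so that the lifetime penalty never exceeds h0τ0; nothing below depends on its exact form.

```python
import math

# Assumed ingredients (arbitrary units): constant happiness h, and an
# age-dependent penalty whose lifetime total saturates at h0 * tau0.
# The exponential shape is an assumption for illustration only.
h, h0, tau0 = 10.0, 1.0, 5.0
u0 = h0 * tau0  # take the suggested bound u0 >= h0 * tau0 with equality


def penalty(age):
    """Accumulated age-dependent penalty by subjective age `age` (assumed form)."""
    return h0 * tau0 * (1.0 - math.exp(-age / tau0))


def lifetime_utility(lifespan):
    """One person's contribution: happiness minus penalty, minus u0 at death."""
    return h * lifespan - penalty(lifespan) - u0


# Compare spending the same simulated time T on one person versus
# terminating at T/2 and creating a fresh, equally happy replacement.
T = 20.0
keep = lifetime_utility(T)
replace = 2 * lifetime_utility(T / 2)
print(f"keep one person:  {keep:.2f}")            # 190.09
print(f"kill and replace: {replace:.2f}")         # 181.35
print(f"difference:       {replace - keep:.2f}")  # -8.74, i.e. replacing is worse
```

The gap equals P(T) − 2P(T/2) − u0, which cannot be positive for any nonnegative penalty curve bounded by h0τ0 once u0 ≥ h0τ0; so, under this reading, early termination plus replacement never beats letting the person live out the same simulated time.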
The penalty doesn’t reset when you create a new human. You are left with the negative value that the killed human left behind, and the new one starts off with a fresh amount of -u0[new person] to compensate for. If the original human had been left alive, he would have compensated for his own, original -u0[original person], and the entire system would have produced a higher value.
To the contrary: turning the simulation on adds up all the -u_0 terms for all of the moral patients in the simulation, meaning that the first tick of the simulation is hugely negative.
I don’t share the opinion that numbers are moral patients within a context where they are visible as numbers, because I don’t think it’s supererogatory or required to run DFEDIT and set every dwarf’s happiness value to MAXINT. And in any context where “I” am visible, to an entity running a simulation of me, as a number or analogous concept, changing the value which corresponds to what “I” “call” “my happiness” within that simulation is not a thing that can be understood in any context that I have access to.