Is the intuition about killing someone and replacing them with someone who will experience equal total happiness assuming that the killing itself directly causes a large drop in total happiness, while the replacement only has total happiness equal to what the killed moral patient would have had without the killing?
Because my intuition is that if the first entity had expected future happiness of 100, but being killed changed that to −1000, then their replacement, in order to result in ‘equal happiness’, must have expected future happiness of 1100, not 100. Intuitively, the more it sucks to be killed, the more benefit is required before killing someone stops being wrong.
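To spell out the arithmetic behind that 1100 figure (just restating the numbers above, on the assumption that the replacement has to restore the total to what it would have been with no killing):

$$100 - (-1000) = 1100$$

That is, the replacement must first fill the −1000 hole left by the act of killing and then also supply the 100 units of future happiness the victim would otherwise have had.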
Being killed doesn’t change your expected happiness; knowing you will be killed does. That’s different. If you want to separate the variables properly, think about someone being gunned down at random with no earlier indication. Being killed just means ending you prematurely and denying you the happiness you would have had were you still alive. A good model will reflect why that’s bad even if you replace the killed person with someone who would compensate for the future loss in happiness.
Pragmatically speaking, killing people causes unhappiness because it hurts the people who lost them, but that is already reflected in the happiness values of those individuals, and a good model will reflect that killing someone is bad even if no one knows about it.
Being killed changes your actual happiness, compared to not being killed. I should not have used ‘expected happiness’ to refer to happiness conditional on not being killed, h|“not killed”.
I’m counting ‘the act of being gunned down’ as worth −1000 utility in itself, on top of cancelling all the happiness that would have accumulated afterwards, and I’m assuming that the replacement person would compensate for all of the negative happiness that the killing caused.
Basically, I’m saying that I expect bleeding out after a gunshot wound to suck, a lot. The replacement’s compensation for the loss in happiness starts from a hole the size of the killing.
I’m assuming that whatever heuristic you’re using survives the transporter paradox; killing Captain Kirk twice a day and replacing him with an absolutely identical copy (just in a different location) is not bad.
Being killed doesn’t change your actual happiness, just your future happiness. If you are literally shot with a sniper rifle while walking down the street with no warning, there is no moment in which you are saddened by your death. You just are, and then you aren’t. What is lost is all the happiness that you would otherwise have experienced. Assume the guy is shot in the head, so there is no bleeding-out part.
I’m not sure where the −1000 number comes from. There is no point at which the person who is shot feels 1000 less happiness than before. Saying “the act itself is worth −1000” is adding a rule to the model: a hard-coded rule that killing someone is −1000. First of all, no such rule exists in total utilitarianism, which is the gap this model fixes. Second of all, not all killings are equally bad, so now you have to come up with a model for that as well. Instead, in this model, when someone is killed, the total moral utility of the population is reduced by an amount equal to at least the minimal “life worth living” happiness for every year the killed person had left. That is pretty intuitive and solves things without hard-coded rules.
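If I’m reading that rule correctly, a minimal sketch of it in symbols (my notation, not anything official from the model) would be:

$$\text{penalty}(\text{kill}) \;\geq\; h_{\min} \cdot T_{\text{remaining}}$$

where $h_{\min}$ is the minimal “life worth living” happiness per year and $T_{\text{remaining}}$ is the number of years the killed person had left. Under that reading, a replacement of merely equal happiness can never make the killing come out neutral; they would have to exceed it by at least this penalty.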
Plus, nobody said “an absolutely identical copy”. The problem with total utilitarianism is that it implies it is OK to murder someone and replace him with someone of EQUAL HAPPINESS, not equal everything. The same heuristic won’t carry over, because it turns on identity issues like “how do we define who is Captain Kirk”. In this model, this problem no longer occurs.
So it IS okay to kill someone and replace them with an absolutely identical copy, as long as the deceased feels no pain and nobody notices?
Is it okay for someone to change their mind about what they were going to do, and produce equal happiness doing something else?
Is it okay to kill someone and replace them with an absolutely identical copy, where nobody notices including the deceased, if the new person changes their mind about what they were going to do and ends up producing equal happiness doing something else?
“So it IS okay to kill someone and replace them with an absolutely identical copy, as long as the deceased feels no pain and nobody notices?”
In total utilitarianism it is OK. This is counter-intuitive, so this model fixes it, and it’s no longer OK. Again, that’s the reason the penalty is there.
The absolutely identical copy trick might be OK, and might not be OK, but that is beside the point. If a completely identical copy is defined as being the same person, then you didn’t replace anybody and the entire question is moot. If it’s not, then you killed someone, which is bad, and that ought to be reflected in the model (which, as of now, it is).
In order to penalize something that probably shouldn’t be explicitly punished, you’re requiring that identity be well-defined.
There’s still the open question of “how bad?”. Personally, I share the intuition that such replacement is undesirable, but I’m far from clear on how I’d want it quantified.
The key situation here isn’t “kill and replace with person of equal happiness”, but rather “kill and replace with person with more happiness”.
DNT is saying there’s a threshold of “more happiness” above which it’s morally permissible to make this replacement, and below which it is not. That seems plausible, but I don’t have a clear intuition about where I’d want to set that threshold.
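A minimal sketch of how such a threshold could be written down, reusing the penalty idea from above (my guess at a formalization, not DNT’s actual formula):

$$h_{\text{replacement}} \;>\; h_{\text{victim, future}} + \text{penalty}(\text{kill})$$

i.e. the replacement is permissible only if their happiness covers both what the victim would have gone on to experience and the penalty for the killing itself; how large that penalty term should be is exactly the part I don’t have a clear intuition about.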