It doesn’t change your actual happiness, only your future happiness. If you are literally shot with a sniper rifle while walking down the street with no warning, there is no moment at which you are saddened by your death. You just are, and then aren’t. What is lost is all the happiness you would otherwise have experienced. Assume the shot is to the head, so there is no bleeding-out period.
I’m not sure where the −1000 number comes from. There is no point at which the person who was shot feels 1000 less happiness than before. Saying “the act itself is worth −1000” is adding a rule to the model: a hard-coded rule that killing someone is worth −1000. First, no such rule exists in total utilitarianism, and this model fixes that. Second, not all killings are equally bad, so you would now have to come up with a model for that too. Instead, in this model, when someone is killed, the total moral utility of the population is reduced by an amount equal to, at least, the minimal “life worth living” happiness for every year the victim had left. That is pretty intuitive and solves the problem without hard-coded rules.
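To make that concrete, here is a minimal sketch of the rule as I read it (Python; `H_MIN`, `death_penalty`, and the example numbers are my own illustrative assumptions, not part of the model):

```python
# Rough sketch of the penalty described above, not the model's actual formula.
# H_MIN and the example numbers are illustrative assumptions.

H_MIN = 1.0  # assumed happiness per year of a life that is barely worth living

def death_penalty(years_remaining: float, h_min: float = H_MIN) -> float:
    """Lower bound on the moral utility lost when someone with
    `years_remaining` years of expected life left is killed."""
    return h_min * years_remaining

# Example: killing a victim with 40 expected years left reduces total
# moral utility by at least 40 * H_MIN.
print(death_penalty(40))  # 40.0
```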
Plus, nobody said “an absolutely identical copy”. The problem with total utilitarianism is that it implies it is okay to murder someone and replace him with someone of EQUAL HAPPINESS, not equal everything. The same heuristic won’t work there (because it runs into identity issues like “how do we define who is Captain Kirk”). In this model, this problem no longer occurs.
So it IS okay to kill someone and replace them with an absolutely identical copy, as long as the deceased feels no pain and nobody notices?
Is it okay for someone to change their mind about what they were going to do, and produce equal happiness doing something else?
Is it okay to kill someone and replace them with an absolutely identical copy, where nobody notices, including the deceased, if the new person changes their mind about what they were going to do and ends up producing equal happiness doing something else?
“So it IS okay to kill someone and replace them with an absolutely identical copy, as long as the deceased feels no pain and nobody notices?”
In total utilitarianism it is okay. This is counter-intuitive, so this model fixes it, and it’s no longer okay. Again, that’s the reason the penalty is there.
The absolutely-identical-copy trick might be okay, and might not be, but that is beside the point. If a completely identical copy is defined as being the same person, then you didn’t replace anybody and the entire question is moot. If it’s not, then you killed someone, which is bad, and that ought to be reflected in the model (which it now is).
In order to penalize something that probably shouldn’t be explicitly punished, you’re requiring that identity be well-defined.
There’s still the open question of “how bad?”. Personally, I share the intuition that such replacement is undesirable, but I’m far from clear on how I’d want it quantified.
The key situation here isn’t “kill and replace with person of equal happiness”, but rather “kill and replace with person with more happiness”.
DNT is saying there’s a threshold of “more happiness” above which it’s morally permissible to make this replacement, and below which it is not. That seems plausible, but I don’t have a clear intuition where I’d want to set that threshold.
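To spell out my reading of that threshold (an illustrative sketch, not DNT’s actual formula; all names and quantities here are my own assumptions):

```python
def replacement_permissible(new_lifetime_happiness: float,
                            victim_remaining_happiness: float,
                            kill_penalty: float) -> bool:
    """Hypothetical reading of the threshold: replacing someone is
    permissible only if the newcomer's happiness exceeds what the victim
    would have had, by more than the penalty attached to the killing."""
    return new_lifetime_happiness > victim_remaining_happiness + kill_penalty
```

On this reading, setting `kill_penalty` to zero collapses back to plain total utilitarianism, so the open question above is exactly how large that penalty should be.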