“So it IS okay to kill someone and replace them with an absolutely identical copy, as long as the deceased feels no pain and nobody notices?”
In total utilitarianism it is ok. This is counter-intuitive, so this model fixes it: it’s no longer ok. Again, that’s the reason the penalty is there.
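To spell out that reasoning with a toy calculation (the notation is mine, not the post’s, and I’m assuming the simplest possible form of the penalty: a fixed cost $d > 0$ charged per death):

\[
U \;=\; \sum_i h_i ,
\qquad
\Delta U_{\text{swap}} \;=\; h_{A'} - h_A \;=\; 0 \quad \text{(identical copy)} ,
\]

so plain total utilitarianism is indifferent to killing $A$ and instantiating the copy $A'$; the swap is permitted. With the per-death penalty added,

\[
\Delta U^{\text{penalized}}_{\text{swap}} \;=\; (h_{A'} - h_A) - d \;=\; -d \;<\; 0 ,
\]

so the swap strictly lowers utility and is no longer ok.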
The absolutely identical copy trick might be ok, and might not be ok, but this is beside the point. If a completely identical copy is defined as being the same person, then you didn’t replace anybody and the entire question is moot. If it’s not, then you killed someone, which is bad, and it ought to be reflected in the model (which it is, as of now).
There’s still the open question of “how bad?”. Personally, I share the intuition that such replacement is undesirable, but I’m far from clear on how I’d want it quantified.
The key situation here isn’t “kill and replace with person of equal happiness”, but rather “kill and replace with person with more happiness”.
DNT is saying there’s a threshold of “more happiness” above which it’s morally permissible to make this replacement, and below which it is not. That seems plausible, but I don’t have a clear intuition where I’d want to set that threshold.
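To make that threshold explicit (again with my own simplified notation: $h$ for a person’s happiness, and a fixed per-death penalty $d > 0$ standing in for whatever form DNT’s penalty actually takes):

\[
\text{kill-and-replace is permissible}
\quad\iff\quad
h_{\text{new}} - h_{\text{old}} \;>\; d .
\]

On this reading, “where to set the threshold” is just the question of how large the death penalty $d$ should be, which is the same quantification question raised above.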
“If it’s not, then you killed someone, which is bad, and it ought to be reflected in the model”
In order to penalize something that probably shouldn’t be explicitly punished, you’re requiring that identity be well-defined.