There’s still the open question of “how bad?”. Personally, I share the intuition that such replacement is undesirable, but I’m far from clear on how I’d want it quantified.
The key situation here isn’t “kill and replace with person of equal happiness”, but rather “kill and replace with person with more happiness”.
DNT is saying there’s a threshold of “more happiness” above which it’s morally permissible to make this replacement, and below which it is not. That seems plausible, but I don’t have a clear intuition where I’d want to set that threshold.