Another similar problem that I’ve encountered runs thus: suppose we’re in a scenario where it’s one person’s life against a million, or a billion, or everyone in the world. Suppose aliens are invading and will leave Earth alone if we kill an arbitrarily chosen innocent bystander. Otherwise, they will pick an arbitrary person, take him to safety, and destroy Earth along with everyone else. In that case, the consensus seems to be that the lives of everyone on Earth far outweigh a healthy innocent’s rights.
The largest difference between the two cases is numbers: five people becomes six billion. If there is another difference, I have yet to find it. But if it is simply a difference in numbers, then whatever justification people use to choose the healthy man over five patients ought to apply here as well.
Within the thought experiment, the difference is simply numbers, and people are giving the wrong answer, provided you specify that killing the donor would increase the total number of years lived (many organ recipients are old and will die soon anyway). Outside the experiment, in the realm of public policy, it is wrong to kill the “donor” in this one case because of the precedent it would set: people would be afraid to go to the hospital for fear of being killed for their organs. And if this were implemented by law, there would be civil unrest that would more than undo the good done.
It sounds like you’re saying that the thought experiment is unfixably wrong, since it can’t be made to match up with reality “outside the experiment”. If that’s the case, then I question whether people are “giving the wrong answer”. Morals are useful precisely for those cases where we do not have enough facts to make a correct decision based only on what we know about the situation. For most people, most of the time, doing the moral thing will pay off, and not doing the moral thing ultimately will not, even though it will quite often appear to for a short while afterward.
At a practical level, there’s another significant difference between the two cases: confidence in the probabilities.
As has been pointed out above, the donor thought experiment has utilitarian implications that reach farther than just the lives of the five people in the doctor’s room. Changing the behavior of doctors will change the behavior of others, since they will come to anticipate different things when they interact with doctors.
On the other hand, we haven’t got much basis for predicting how choosing one of the two scenarios will influence the aliens, or even for thinking that they’ll come back.