The transplant dilemma is framed in a way that personalizes the healthy young traveler while keeping the other five patients anonymous. This activates the part of our brains that treats people as individuals rather than numbers. There’s nothing wrong with the math. We object to the unfairness to the only individual in the story.
This dilemma is usually paired with an example of triage. Here an emergency-room doctor has to choose between saving one severely injured patient and saving five moderately injured patients. Five lives or one: the numbers are the same, but as long as all six patients are anonymous, it remains a numeric problem, and no one objects to the math.
Interesting; your phrasing (one severely injured vs. five moderately injured) seems to pass on both deontological and utilitarian grounds for me, but if it were saving five severely injured people by letting one moderately injured person die of their injuries, it would feel like the trolley problem again.
Maybe it is less “unfairness to individuals” and more “unfairness to people better off than the subjects”.
I put in the different degrees of injury to set the context for the doctor’s choice; maybe it takes five times as long to save the severely injured person. I didn’t mean to imply that the severity of the injury affects the moral calculation.
You’re right, this is like the trolley problem. When all six people are anonymous, we do the calculation and kill one to save five. When the trolley problem is framed as “push the fat man off the bridge”, that’s enough personalization to trigger the other part of the brain.
Moral philosophy in general tries to find universal principles whose logical consequences agree with our moral intuition. The OP is saying that we can fix consequentialism by making the moral calculations more complicated. Good luck with that! If moral intuition comes from two different parts of the brain that don’t agree with each other, then we can always construct moral dilemmas by framing situations so that they activate one part or another of our brains.
“Philosophy tries… to agree with our… intuition”? Bravo! See, I think that’s crazy. Or if it’s right, it means we’re stipulating the intuition in the first place. Surely that’s wrong? Or at least, we can look back in time and see “obvious” moral postulates we no longer agree with. In science we come up with a theory and then test it in the wind tunnel or something. In philosophy, is our reference-standard kilogram just an intuition? That’s unsatisfying!
I had fun with friends recently considering the trolley problem from the perspective of INaction. When saving the fat man required an act of volition, even (say) just a warning shout, we felt less compelled to keep him alive. (He was already on the track and would have to be warned off, get it?) It seems we hold ourselves responsible for what we do, not so much for what we elect NOT to do. Since the consequences are the same, it seems wrong that there is a perceived difference.

This highlights, I suppose, the author’s presumed contention (consequentialism generally) that the correct ethical choice is obviously one of carefully (perhaps expensively!) calculated long-term outcomes, agreeing with what feels right only coincidentally. I think in the limit we would (consequentialists all) just walk into the hospital and ask for vivisection, since we’d save five lives. The reason I don’t isn’t JUST a lack of altruism, because I wouldn’t ask you to do it either; instead it’s a step closer to Kant’s absolutism: as humans we’re worth something more than ants (who, I submit, are all consequentialists?) and have individual value. I need to work on expressing this better...