I put in the different degrees of injury to set the context for the doctor’s choice… maybe it takes 5 times as long to save the severely injured person. I didn’t mean to imply that the severity of the injury affects the moral calculation.
You’re right, this is like the trolley problem. When all 6 people are anonymous, we do the calculation and kill 1 to save 5. When the trolley problem is framed as “push the fat man off the bridge”, that’s enough personalization to trigger the other part of the brain.
Moral philosophy in general tries to find universal principles whose logical consequences agree with our moral intuition. The OP is saying that we can fix consequentialism by making the moral calculations more complicated. Good luck with that! If moral intuition comes from two different parts of the brain that don’t agree with each other, then we can always construct moral dilemmas by framing situations so that they activate one part or another of our brains.
“Philosophy tries… to agree with our… intuition”? Bravo! See, I think that’s crazy. Or, if it’s right, it means we’re stipulating the intuition in the first place. Surely that’s wrong? Or at least, we can look back in time and see “obvious” moral postulates we no longer agree with. In science we come up with a theory and then test it in the wind tunnel or something. In philosophy, is our reference-standard kilogram just an intuition? That’s unsatisfying!
I had fun with friends recently considering the trolley problem from a perspective of INaction. When saving the fat man required an act of volition, even (say) just a warning shout, they (we) felt less compelled to let him live. (He was already on the track and would have to be warned off, get it?) It seems we are responsible for what we do, not so much for what we elect NOT to do. Since the consequences are the same, it seems wrong that there is a perceived difference. This highlights, I suppose, the author’s presumed contention (consequentialism generally) that the correct ethical choice is obviously the one with the carefully (perhaps expensively!) calculated long-term outcome, and matches what feels right only coincidentally. I think in the limit, we would (consequentialists all) just walk into the hospital and ask for vivisection, since we’d save 5 lives. The reason I don’t isn’t JUST altruism, because I wouldn’t ask you to either; instead it’s a step closer to Kant’s absolutism: as humans we’re worth something more than ants (who, I submit, are all consequentialists?) and have individual value. I need to work on expressing this better...