I’ve used the trolley problem a lot: at first to show off my knowledge of moral philosophy, but later, once I realized that anyone who knows any philosophy has already heard it, to shock friends who think they have a perfect and internally consistent moral system worked out. But I add a twist, which I stole from an episode of Radiolab (which got it from the last episode of M*A*S*H), that I think makes it much more effective: say you’re the mother of a baby in a village in Vietnam, and you’re hiding with the rest of the village from the Viet Cong. Your baby starts to cry, and you know that if it keeps crying they’ll find you and kill the whole village. But you could smother the baby (your baby!) and save everyone else. The size of the village can be adjusted up or down to hammer in the point. Crucially, I lie at first and say this is an actual historical event that really happened.
I usually save this one for people who smugly answer both trolley questions with “they’re the same; of course I’d kill one to save five in each case,” but it’s also remarkably effective at dispelling objections of implausibility and rejection of the experiment. I’m not sure why this works so well, but I think our bias toward narratives we can place ourselves in helps. Almost everyone at this point says they think they should kill the baby, but they just don’t think they could, to which I respond, “Doesn’t the world make more sense when you realize you value thousands of complex things in a fuzzy and inconsistent manner?” Unfortunately, I have yet to make friends with any true psychopaths. I’d be interested to hear their responses.
This is only equivalent to a trolley problem if you specify that the baby (but no one else) would be spared, should the Viet Cong find you. Otherwise, the baby is going to die anyway, unlike the lone person on the second trolley track who may live if you don’t flip the switch.
You could hack that in easily; surely most soldiers have qualms about killing babies.
Great point. I’d never thought of that, and no one I’ve ever tried this one on has mentioned it either. It makes it more interesting to me that some people still wouldn’t kill the baby, though that may be for reasons other than real moral calculation.
For my own part: I have no idea whether I would kill the baby or not.
And I have even less of an idea whether anyone else would… I certainly don’t take giving answers like “I would kill the baby in this situation” as reliable evidence that the speaker would kill the baby in this situation.
But I generally understand trolley problems to be asking about what I think the right thing to do in situations like this is, not asking me to predict whether I will do the right thing in them.
I agree; I can’t reliably predict my own actions. I think I know the morally correct thing to do, but I’m skeptical of my (or anyone’s) ability to make reliable predictions about their actions under extreme stress. As I said, I usually use this on people who seem overly confident in the consistency of their morality and their ability to follow it, as well as on people who question the plausibility of the original problem.
But I do recall the response distribution for this question mirroring the distribution for the second trolley problem: far fewer people take the purely consequentialist view than when they just have to flip a switch, even independent of their ability to act morally. I still don’t find it incredibly illuminating, as all it shows is that our moral intuitions are fundamentally fuzzy, or at least that we value things other than just how many people live or die.
Maybe this can work as an analogy:
Right before the massacre at My Lai, a squad of soldiers is pursuing a group of villagers. A scout, up ahead at a small river, sees that the villagers are splitting up and going in different directions. An elderly man goes to the left of the river, and the five other villagers go to the right. The old man deliberately leaves a large trail in the jungle, so as to fool the pursuers.
The scout waits a few minutes until the rest of his squad joins him. They are heading along the right side of the river and will probably continue that way, at the risk of killing the five villagers. The scout signals to the others that they should go left. The party follows, and they soon capture the elderly man and bring him back to the village center, where he is shot.
Should the scout instead have said nothing, or kept running forward, so that his team would have killed the five villagers instead?
There are some problems with equating this to the trolley problem. First, the scout cannot know for certain beforehand that his team will go in the direction of the larger group. Second, the best solution may be to try to stop the squad by faking a reason to go back to the village (saying the villagers must have run in a completely different direction).
Even then it would be rather different from a trolley problem. After all, it involves asking a mother whether she would sacrifice her own child for the ‘greater good’. The only reasonable response I can think of to that question is a solid slap in the face! How dare they ask someone that!
I immediately thought, “Kill the baby.” No hesitation.
I happen to agree with you on morality being fuzzy and inconsistent. I’m definitely not a utilitarian. I don’t approve of policies of torture, for example. It’s just that the village obviously matters more than a goddamn baby. The trolley problem, being more abstract, is more confusing to me.
They would say the same thing only with more sincerity.
The answer that almost everyone gives seems very sensible. After all, the questions “What do I believe I would actually do?” and “What do I think I should do?” are different. Obviously, self modifying to the point where these answers are as consistent as possible in the largest subset of scenarios as possible is probably a good thing, but that doesn’t mean such self-modification is easy.
Most mothers would simply be incapable of doing such a thing. If they could press a button to kill their baby, more would probably do so, just as more people would flip a switch to kill someone than push him in front of a train.
You obviously should kill the baby, but it is much more difficult to honestly say you would kill a baby than flip a switch: the distinction is not one of morality but courage.
As a side note, I prefer the trolley-problem modification where you can have an innocent, healthy young traveler killed in order to save 5 people in need of organs. Saying “fat man”, at least for me, obfuscates the moral dilemma and makes it somewhat easier.
self modifying to the point where these answers are as consistent as possible in the largest subset of scenarios as possible…
...weighted by the likelihood of those scenarios, and the severities of the likely consequences of behaving inconsistently in those scenarios.
Most problems of this sort are phrased in ways that render the situation epistemically unreachable, which makes their likelihood so low as to be worth ignoring.
Re: your side note… am I correct in understanding you to mean that you find imagining killing a fat man less uncomfortable than imagining killing a healthy young traveler?
but it’s also remarkably effective at dispelling objections of implausibility and rejection of the experiment.
If this were a real situation rather than an artificial moral dilemma, I’d say that if you can’t silence the baby just by covering its mouth, you should shake it. That gets babies to stop making noise, and while it’s definitely not good for them, it still gives the baby better odds than being smothered to death.
I would smother the baby and then feel incredibly, irrationally guilty for weeks or months.
I am not a psychopath, but I am a utilitarian. I value having a consistent set of values more than I value any other factor that has come into conflict with that principle so far.
I hope I’d do the same. I’ve never had to kill anyone before though, much less my own baby, so I can’t be totally sure I’d be capable of it.
Utilitarian specifically or consequentialist?
Consequentialist; I should know better than to be imprecise about that here, especially because there are sad things I find to have great value.
Almost everyone at this point says they think they should kill the baby, but they just don’t think they could
The “at this point” part is interesting. Have you ever tried asking the question without the abstract priming? I’d like to see the difference.