Having run into this problem when presenting the trolley problem on many occasions, I’ve come to wonder whether it might just be the right kind of response: can we really address moral quandaries in the abstract? I suspect not, and that when people try to make these ad hoc adjustments to the scenario, they’re coming closer to thinking morally about the situation, precisely insofar as they’re imagining it as a real event with its stresses, uncertainties, and possibilities.
Maybe it’s just that the trolley problem is a really terrible example. It seems to ask us to consider trains and/or people that operate under some system of physics other than the one we are familiar with.
Maybe an adjustment would make it better. How about this:
A runaway train carrying a load of ore is coming down the track and will hit five people, certainly killing them, unless a switch is activated which changes the train’s path. Unfortunately, the switch will activate only when a heavy load is placed on a connected pressure plate (set up this way so that when one train on track A drops off its cargo, the following train will be routed to track B). Furthermore, triggering the pressure plate has an unfortunate secondary effect: it activates a macerator almost instantly, chopping up whatever is on the plate (typically raw ore) so that it can be sucked easily through a tube into a storage area, rather like a giant food disposal.
Standing next to the plate, you consider your options. You know, from your experience working on the site, that the plate and track-switch system work quite reliably, but that you are too light to trigger it even if you tried jumping up and down. However, a very fat man is standing next to you, and you are certain that he is heavy enough. With one shove, you could push him onto the plate, saving the lives of the five people on the tracks but causing his grisly death instead. Also, the switch’s design does not include any manual activation button near the plate itself; damn those cheap contractors!
There are only a few seconds before the train will pass the switch point, and from there only a few seconds until it hits the people on the track; not enough time to try anything clever with the mechanism, or for the five people to get out of the narrow canal in which the track runs. You frantically look around, but no other objects of any significant weight are nearby. What should you do?
That works, or at any rate I can’t think of plausible ways to get out of your scenario. My worry, though, is that people’s attempts to come up with alternatives are actually evidence that hypothetical moral problems have some basic flaw.
I’m having a hard time coming up with an example of what I mean, but suppose someone were to describe a non-existent person in great detail and ask you if you loved them. It’s not that you couldn’t love someone who fit that description, but rather that the kind of reasoning you would have to engage in to answer the question ‘do you love this person?’ just doesn’t work in the abstract.
So my thought was that maybe something similar is going on with these moral puzzles. This isn’t to say moral theories aren’t worthwhile, but rather that the conditions necessary for their rational application exclude hypotheticals.
It’s not a flaw in the hypotheticals. Rather, it’s a healthy desire in humans to find better tradeoffs than the ones initially presented to them.