It ignores the global secondary effects that local choices create.
It ignores real human nature—which would be to freeze and be indecisive.
It usually gives you two choices and no alternatives, and in real life, there are always alternatives.
I broadly agree with this, but there’s another reason trolley problems are flawed: it is hard to disentangle one’s judgment of impracticality (à la 4) from one’s judgment of moral impermissibility. Pushing a fat guy is just such an implausibly stupid way to stop a trolley that my intuition is going to keep shouting NO at that problem, no matter how much you verbally specify that I have perfect knowledge it will work.
I wonder if it’s better or worse to construct problems that are implausible from the very start, instead of being potentially realistic up to a certain point where you’re asked to suspend disbelief. (Similar to how we do decision problems here, with Omega portrayed as a superintelligence from another galaxy who is nearly omniscient and whose sole goal appears to be giving people confusing decision problems. IIRC, conventional treatments of decision theory often portray the Predictor as a human and do not explain why his predictions tend to be accurate, only specifying that he has previously been right 99% or 100% of the time. I suspect that format tends to encourage people to make excuses not to answer the real question.)

So, suppose instead of the traditional trolley problem, we say: “An invincible demon appears before you with a hostage tied to a chair, and he gives you a gun. He tells you that you can shoot the hostage in the head or untie her and set her free, and that if and only if you set her free, he will go off and kill five other people at random. What do you do?” Does that make it better or worse, in terms of your ability to separate the implausibility of the situation from your ability to come to a moral judgment?
Your version adds an irrelevancy—the possible moral agency of the demon provides an out for the test subject: “It is not my fault those 5 people died; the demon did it.” It is much more difficult to shift moral responsibility to the trolley.
Good point, though that still tests whether a person thinks of “moral agency” as a relevant factor in deciding what to do.
I’m not sure why it’s perceived as more difficult. The trolley didn’t just appear magically on the tracks. Someone put it there and set it moving (or negligently allowed it to).
Well, I perceive it as more difficult because of my intuitions about how culpability travels up a causal chain.
For example, if someone dies because of a bullet fired into their heart from a gun held by a hand controlled by a brain B following an instruction given by agent A, my judgment of culpability travels unattenuated through the bullet and the gun and the hand. To what degree it grounds out in B and A depends on to what degree I consider B autonomous… if B is an advanced ballistics-targeting computer, I might be willing to call it a brain but still unwilling to hold it culpable for the death, for example. Either way, the bulk of the culpability grounds out there. I may go further and look at the social structures and contingent history that led to A and B (and the hand, bullet, gun, heart, etc.) being the way they are, but that will at best be in addition to the initial judgment, and I probably won’t bother.
Similarly, if five people are hit by a trolley that rolled down a track that agent A chose not to stop, my intuitions of culpability ground out in A. Again, I may go further and look at the train switching systems and so on and so forth, but that will be in addition to the initial judgment, and I probably won’t bother.
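(If it helps to make that heuristic concrete, here is a toy sketch in Python. The names, the numeric “autonomy” scores, and the threshold are all invented for illustration; the sketch just encodes the rule that culpability passes through non-autonomous links and grounds out at the first sufficiently autonomous agent upstream of the harm.)

    # Toy model of the culpability heuristic described above.
    # Culpability travels unattenuated through non-autonomous links
    # and grounds out at the first sufficiently autonomous agent.
    def assign_culpability(chain, autonomy_threshold=0.5):
        """chain is ordered from the proximate cause back toward the
        remote cause; return the first link autonomous enough to be
        held culpable, or None if culpability never grounds out."""
        for link in chain:
            if link["autonomy"] >= autonomy_threshold:
                return link["name"]
        return None

    chain = [
        {"name": "bullet", "autonomy": 0.0},
        {"name": "gun", "autonomy": 0.0},
        {"name": "hand", "autonomy": 0.0},
        {"name": "brain B (targeting computer)", "autonomy": 0.3},
        {"name": "agent A", "autonomy": 0.9},
    ]

    print(assign_culpability(chain))  # -> agent A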
I find it helpful to remember that intuitions about culpability are distinct from beliefs about responsibility.
Nitpick: why can’t I leave the hostage tied to the chair without shooting her?
I begin to suspect that it’s impossible to come up with a moral dilemma so implausibly simplified that nobody can possibly find a way to nitpick it. :P
(Though that one was just sloppiness on my part, I admit.)
Or untie her and then shoot her? ;)
Well, if you were a superintelligence from another galaxy who was nearly omniscient, what would YOU do with it?
Make paperclips, of course.
Fondly regard creation.
Problem #6: the situations are almost invariably underspecified. (Problem 2 is a special case of this.) The moral judgments elicited depend on features that are not explicit, about which the reader can only make assumptions. Such as, how did the five people get on the tracks? Kidnapped and tied up by Dick Dastardly? Do they work for the railroad (and might they then also be responsible for the maintenance of the trolley)? And so on.
When a researcher uses contrived problems to test people’s moral intuitions, it would help to include a free-form question inviting the respondent to say what other information they need to form a moral judgment. That way, the next time the “trolley problem” is trotted out, the researchers will be in a better position to understand which features make a difference to the moral verdicts.
ETA: didn’t see MatthewW’s similar point until after I replied.