Hypotheses:
1. The hospital scenario involves a lot more decisions, so it seems as though there’s more rule-breaking.
2. You want a hard rule that medical personnel won’t do injury to the people in their hands.
3. The trolley scenario evokes prejudice against fat people. It needs variants, like redirecting another trolley that has many fewer people in it to block the first trolley, or perhaps knocking two football players (how?) into the trolley’s path.
Specifically: You don’t want future people to avoid seeking medical treatment — or to burn doctors at the stake — out of legitimate fear of being taken apart for their organs. Even if you tell the victims that it serves the greater good for doctors to do that once in a while, the victims’ goals aren’t served by being sacrificed for the good of five strangers. The victims’ goals are much better served by going without a medical checkup, or possibly by leading a mob to kill all the doctors.
There is a consequentialist reason to treat human beings as ends rather than means: If a human figures out that you intend to treat him or her as a means, this elicits a whole swath of evolved-in responses that will interfere with your intentions. These range from negotiation (“If you want to treat me as a means, I get to treat you as a means too”), to resistance (“I will stop you from doing that to me”), to outright violence (“I will stop you, so you don’t do that to anyone”).
Of course you can add factors to the thought experiment such that, even while following utilitarianism, you will decide not to harvest the traveler for his organs. But that also dodges the problem the experiment tries to show: that being strictly utilitarian sometimes leads to uncomfortable conclusions, that is, conclusions that conflict with the kind of ‘folk’, ‘knee-jerk’ morality we seem to have.
It depends what you consider the starting point for building the scenario. If you start by taking the story seriously as a real-world scenario, taking place in a real hospital with real people, then these are relevant considerations that would naturally arise as you were thinking through the problem, not additional factors that need to be added on. The work comes in removing factors to turn the scenario into an idealized thought experiment that boxes utilitarianism into one side, in opposition to our intuitive moral judgments. And if it’s necessary to make extensive or unrealistic stipulations in order to rule out seemingly important considerations, then that raises questions about how much we should be concerned about this thought experiment.
Sure. But what’s the point of putting “doctor” in the thought-experiment if it isn’t to arouse the particular associations that people have about doctors — one of which is the notion that doctors are trusted with unusual levels of access to other people’s bodies? It’s those associations that lead to people’s folk morality coming up with different answers to the “trolley” form and the “doctor” form of the problem.
A philosophical system of ethics that doesn’t add up to folk morality most of the time, over most historical facts, would be readily recognized as flawed, or even as not really ethics at all but something else. A system of ethics that added up to baby-eating, paperclipping, self-contradiction, or, for that matter, to the notion that it’s evil to have systems of ethics, would not be the sort of ethics worth wanting.
Well, if a different thought experiment leads to different ‘folk-morality’-based conclusions while making no difference from a strictly utilitarian viewpoint, doesn’t that show the two are not fully compatible? Obviously, you can make them agree again by adding things, but that doesn’t resolve the original problem.
It’s indeed important for the success of an ethical system that it resonate with folk morality, but I also think the phenotype of everyday folk morality is a hodge-podge of biases and illusions. If we took the basics of folk morality (what would those be? maybe the golden rule plus utilitarianism?), I think something more consistent could be forged.
I have two responses, based on two different interpretations of your comment:
I don’t see why a “strict utilitarian” would have to be a first-order utilitarian, i.e. one that only sees the immediate consequences of its actions and not the consequences of others’ responses to its actions. To avoid dealing with the social consequences (that is, extrapolated multi-actor consequences) of an act means to imagine it being performed in a moral vacuum: a place where nobody outside the imagined scene has any way of finding out what happened or responding to it. But breaking a rule when nobody is around to be injured by it or to report your rule-breaking to others is a significantly different thought experiment from breaking a rule under normal circumstances. (A toy calculation below this comment makes the difference concrete.)
Humans (and, perhaps, AIs) are not designed to live in a world where the self is the only morally significant actor. They have to care about what other morally significant persons care about, and at more than a second-order level: they need to see not just smiling faces, but have the knowledge that there are minds behind those smiling faces.
And in any event, human cognition does not perform well in social or cognitive isolation: put a person in solitary confinement and you can predict that he or she will suffer and experience certain forms of cognitive breakdown; keep a scientific mind in isolation from a scientific community and you can expect that you will get a kook, not a genius.
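To make the first-order versus extrapolated distinction concrete, here is a minimal sketch; every number in it is a made-up illustrative assumption, not anything taken from the discussion above. Counting only the immediate act, harvesting one traveler to save five looks like a net gain; counting the predictable response of people who learn what hospitals do, it can easily come out a net loss.

```python
# Toy sketch of the first-order vs. extrapolated-consequence distinction.
# Every number here is an invented illustrative assumption.

LIVES_SAVED_PER_HARVEST = 5   # transplant recipients who survive
LIVES_TAKEN_PER_HARVEST = 1   # the harvested traveler

def first_order_utility() -> int:
    """Utility as a 'first-order' utilitarian sees it: the immediate act only."""
    return LIVES_SAVED_PER_HARVEST - LIVES_TAKEN_PER_HARVEST  # +4

def extrapolated_utility(p_discovery: float,
                         people_deterred: int,
                         p_death_if_untreated: float) -> float:
    """Utility once others' responses are included: if the practice is
    discovered, some people avoid hospitals, and some of them die."""
    response_cost = p_discovery * people_deterred * p_death_if_untreated
    return first_order_utility() - response_cost

print(first_order_utility())                    # 4
print(extrapolated_utility(0.5, 10_000, 0.01))  # 4 - 50 = -46.0
```

On these invented numbers the same act flips sign once others’ responses are counted, which is the shape of the argument above: a strict utilitarian need not be a first-order one.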
Some people seem to treat the trolley problem and the doctor problem as the same problem stated in two different ways, in such a way as to expose a discrepancy in human moral reasoning. If so, this might be analogous to the Wason selection task, which exposes a discrepancy in human symbolic reasoning. Wason demonstrates that humans reason more effectively about applying social rules than about applying arithmetical rules isomorphic to those social rules. I’ve always imagined this as humans using a “social coprocessor” to evaluate rule-following, which is not engaged when thinking about an arithmetical problem.
Perhaps something similar is going on in the trolley problem and doctor problem: we are engaging a different “moral coprocessor” to think about one than the other. This difference may be captured by different schools of ethics: the doctor problem engages a “deontological coprocessor”, wherein facts such as a doctor’s social role and expected consequences of betrayal of duty are relevant. The trolley problem falls back to the “consequentialist coprocessor” for most readers, though, which computes: five dead is worse than one dead.
Perhaps the consequentialist coprocessor is a stupid, first-order utilitarian, whereas the deontological coprocessor deals with the limit case of others’ responses to our acts better.
Utilitarians can thereby extricate themselves from their prima facie conclusion that it’s right to kill the innocent man. However, the solution has the form: “We cannot do what is best utility-wise because others, who are not utilitarians, will respond in ways that damage utility to an even greater extent than we have increased it.”
However, this kind of solution doesn’t speak very well for utilitarianism, for consider an alternative: “We cannot do what is best paperclip-wise because others, who are not paperclippers, will respond in ways that tend to reduce the long-term future paperclip count.”
In fact, Clippy can ‘get the answer right’ on a surprisingly high proportion of moral questions if he is prepared to be circumspect, consider the long term, and keep in mind that no-one but him is maximizing paperclips.
But then this raises the question: Assuming we lived in a society of utilitarians, who feel no irrational fear at the thought of being harvested for the greater good, and no moral indignation when others are so harvested, would this practice be ‘right’? Would that entire society be ‘morally preferable’ to ours?
There is an alternate version where the man on the footbridge is wearing a heavy backpack, rather than being fat. That’s the scenario that Josh Greene & colleagues used in this paper, for instance.
Fat people are OK but you have a problem with football players?
There are an awful lot of people who are interested in decision problems who might just say “push ’em” as group-affiliation humor!
The overt reason for pushing a fat man is that it’s the way to kill only one person while mustering sufficient weight to stop the trolley. It seems plausible that what’s intended is a very fat man, or you could just specify a large person.
Two football players seems like a way of specifying a substantial amount of weight while involving few people.