Of course you can add factors to the thought experiment that, even under strict utilitarianism, will make you decide not to harvest the traveler for his organs. But that is also dodging the problem the experiment tries to show: that being strictly utilitarian sometimes leads to uncomfortable conclusions, that is, conclusions that conflict with the kind of ‘folk’, ‘knee-jerk’ morality we seem to have.
It depends on what you consider the starting point for building the scenario. If you start by taking the story seriously as a real-world scenario, taking place in a real hospital with real people, then these are relevant considerations that would naturally arise as you thought through the problem, not additional factors that need to be added on. The work comes in removing factors to turn the scenario into an idealized thought experiment that boxes utilitarianism into one side, in opposition to our intuitive moral judgments. And if extensive or unrealistic stipulations are needed to rule out seemingly important considerations, that raises questions about how much weight we should give the thought experiment.
Sure. But what’s the point of putting “doctor” in the thought-experiment if it isn’t to arouse the particular associations that people have about doctors — one of which is the notion that doctors are trusted with unusual levels of access to other people’s bodies? It’s those associations that lead to people’s folk morality coming up with different answers to the “trolley” form and the “doctor” form of the problem.
A philosophical system of ethics that doesn’t add up to folk morality most of the time, across ordinary historical cases, would be readily recognized as flawed, or even as not really ethics at all but something else. A system of ethics that added up to baby-eating, paperclipping, self-contradiction, or, for that matter, to the notion that it’s evil to have systems of ethics, would not be the sort of ethics worth wanting.
Well, if a different thought experiment leads to different ‘folk-morality’-based conclusions while making no difference from a strictly utilitarian viewpoint, doesn’t that show the two are not fully compatible? Obviously, you can make them agree again by adding things, but that does not resolve the original problem.
For an ethical system to succeed it is indeed important that it resonates with folk morality, but I also think the phenotype of everyday folk morality is a hodge-podge of biases and illusions. If we took the basics of folk morality (what would that be; maybe the golden rule plus utilitarianism?), I think something more consistent could be forged.
I have two responses, based on two different interpretations of your comment:
I don’t see why a “strict utilitarian” would have to be a first-order utilitarian, i.e. one that only sees the immediate consequences of its actions and not the consequences of others’ responses to its actions. To avoid dealing with the social consequences (that is, extrapolated multi-actor consequences) of an act is to imagine it being performed in a moral vacuum: a place where nobody outside the imagined scene has any way of finding out what happened or responding to it. But breaking a rule when nobody is around to be injured by it or to report your rule-breaking to others is a significantly different thought experiment from breaking a rule under normal circumstances.
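To make the distinction concrete, here is a minimal toy sketch in Python (nothing proposed in this thread; the payoff numbers are invented purely for illustration) of the gap between a first-order utility calculation, which counts only the immediate outcome of the act, and a multi-actor one that also charges the act for the expected cost of others’ responses, such as lost trust in doctors once organ harvesting is a known possibility.

```python
# Toy model only: the numbers are invented for illustration and carry no
# empirical weight.

def first_order_utility(lives_saved: int, lives_lost: int) -> float:
    """Counts only the immediate outcome of the act itself."""
    return lives_saved - lives_lost

def multi_actor_utility(lives_saved: int, lives_lost: int,
                        p_discovery: float, trust_cost: float) -> float:
    """Also charges the act for the expected cost of others' responses,
    e.g. people avoiding hospitals once harvesting is a known risk."""
    return lives_saved - lives_lost - p_discovery * trust_cost

# The transplant case: five recipients saved, one traveler killed.
print(first_order_utility(5, 1))              # 4: harvesting looks good
print(multi_actor_utility(5, 1, 0.9, 100.0))  # -86.0: harvesting looks bad
```

The point is only that “strict utilitarian” does not by itself tell us which of these two functions is being maximized.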
Humans (and, perhaps, AIs) are not designed to live in a world where the self is the only morally significant actor. They have to care about what other morally significant persons care about, and at more than a second-order level: they need not just to see smiling faces, but to know that there are minds behind those smiling faces.
And in any event, human cognition does not perform well in social or cognitive isolation: put a person in solitary confinement and you can predict that he or she will suffer and experience certain forms of cognitive breakdown; keep a scientific mind in isolation from a scientific community and you can expect that you will get a kook, not a genius.
Some people seem to treat the trolley problem and the doctor problem as the same problem stated in two different ways, in such a way as to expose a discrepancy in human moral reasoning. If so, this might be analogous to the Wason selection task, which exposes a discrepancy in human symbolic reasoning. The Wason task demonstrates that humans reason more effectively about applying social rules than about applying abstract logical rules isomorphic to those social rules. I’ve always imagined this as humans using a “social coprocessor” to evaluate rule-following, which is not engaged when thinking about the abstract version of the problem.
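For readers who haven’t seen the task, here is a minimal sketch of its classic abstract form (the standard textbook card set and rule, not anything from this discussion). The cards worth turning over are exactly the ones that could falsify the rule; most people instead pick the vowel and the even number.

```python
# Classic abstract Wason task: each card has a letter on one face and a
# number on the other. Rule under test: "if a card shows a vowel on one
# face, it has an even number on the other face."

def must_flip(visible_face: str) -> bool:
    """True if this card could falsify 'if vowel then even'."""
    if visible_face.isalpha():
        # A vowel (P) might hide an odd number; a consonant cannot violate the rule.
        return visible_face.lower() in "aeiou"
    # An odd number (not-Q) might hide a vowel; an even number cannot violate it.
    return int(visible_face) % 2 == 1

cards = ["E", "K", "4", "7"]
print([card for card in cards if must_flip(card)])  # ['E', '7']
```

Recast as a social rule (“if someone is drinking beer, they must be of legal age”), the logically identical check (flip the beer drinker and the underage patron) suddenly becomes easy for most people, which is the asymmetry the “social coprocessor” picture is meant to capture.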
Perhaps something similar is going on in the trolley problem and doctor problem: we are engaging a different “moral coprocessor” to think about one than the other. This difference may be captured by different schools of ethics: the doctor problem engages a “deontological coprocessor”, wherein facts such as a doctor’s social role and expected consequences of betrayal of duty are relevant. The trolley problem falls back to the “consequentialist coprocessor” for most readers, though, which computes: five dead is worse than one dead.
Perhaps the consequentialist coprocessor is a stupid, first-order utilitarian, whereas the deontological coprocessor deals better with the limit case of others’ responses to our acts.