There’s another version of the trolley problem that’s even squickier than the “push a man onto the track” version...
“A brilliant transplant surgeon has five patients, each in need of a different organ, each of whom will die without that organ. Unfortunately, there are no organs available to perform any of these five transplant operations. A healthy young traveler, just passing through the city the doctor works in, comes in for a routine checkup. In the course of doing the checkup, the doctor discovers that his organs are compatible with all five of his dying patients. Suppose further that if the young man were to disappear, no one would suspect the doctor.”
-- Judith Jarvis Thomson, The Trolley Problem, 94 Yale Law Journal 1395-1415 (1985)
For some reason, it’s a lot less comfortable to endorse murdering the patient than it is to endorse pushing the fat man onto the track...
That one was raised by a visiting philosopher at my college as an argument (from intuition) against utilitarianism. I pointed out that if we made a practice of killing patients and harvesting their organs to save more patients, people would be so fearful of being harvested that they would tend not to visit hospitals at all, leading to a greater loss of health and life. So in this case, in any realistic formulation, the less comfortable option is also the one that leads to less utility.
I suspect that this version feels even less comfortable than the trolley dilemma because it involves the violation of an implicit social contract: if you go into a hospital, they’ll try to make you healthier, not kill you. But while violating implicit social contracts tends to be a bad idea, that’s no guarantee that the utilitarian thing to do won’t sometimes be massively uncomfortable.
There are a number of science fiction stories about uncomfortable utilitarian choices; “The Cold Equations” is the most famous. I think Heinlein wrote a novel with a character in charge of a colony that ran out of power, who killed half the colonists so that the remaining life support would be enough to keep the others alive until relief arrived. No one stopped him at the time, but after they were safe, they branded him a war criminal or something like that.
I don’t think that’s a Heinlein. I don’t have a specific memory of that story, and his work didn’t tend to be that bleak. I’m willing to be surprised if someone has a specific reference.
There’s also Eliezer’s Three Worlds Collide, which has a short aside on ships trying to take on just one more passenger and getting caught in the nova. And I think the movie Titanic had an officer cold-bloodedly executing a man who tried to get onto a full lifeboat, potentially sinking it.
It’s possible that you are referring to the secondary plot line of Chasm City by Alastair Reynolds, in which gur nagvureb wrggvfbaf unys gur uvoreangvba cbqf va uvf fgnefuvc, nyybjvat vg gb neevir orsber gur bguref va gur syrrg naq fb tnva zvyvgnel nqinagntr.
No, that’s different. I was referring to a commander who saved lives, but was condemned for doing that instead of letting everybody die.
Does Less Wrong have rot13 functionality built in?
No, I used http://www.rot13.com .
Alastair Reynolds’s “Chasm City” has a similar back-story. Several colony ships are heading to a new planet, but after generations in space their crews have developed cold-war-style hostilities. The captain of one of the ships kills half the cryo-preserved colonists and jettisons their weight so that he doesn’t have to start slowing his ship as soon as the other three do. Arriving several weeks before the rest, his colonists get all the best colony landing spots and dominate the planet. He is immediately captured and executed as a war criminal, but generations later people view him with mixed emotions—a bit of a monster, yet one who sacrificed himself so that his people could win the planet.
If the likelihood of me needing a life-saving organ transplant at some point in my life is the same as for most other people, then I think I’d bite the bullet and agree to a system in which random healthy people are killed for their organs. Why? Because I’d have five times the chance of being saved as of being killed.
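To make that ratio explicit, here is a toy back-of-the-envelope sketch (the need-rate is an invented number of mine, not anything from the comment above):

```python
# Toy version of the 5:1 argument; all numbers are illustrative assumptions.
need_rate = 0.01            # assumed lifetime chance of needing one of the five organs
p_saved = need_rate         # you are saved if you turn out to be one of the five patients
p_killed = need_rate / 5    # one healthy donor is harvested per five patients saved

print(p_saved / p_killed)   # -> 5.0: five times the chance of being saved as killed
```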
Except, of course, for the chance of being slain in the inevitable civil war that ensues. ;)
I remember a short story—title and author escape me—where something much like this was going on. Everyone had their relevant types on file, and if enough people needed organs you could supply, you were harvested for said organs. The protagonist got notified that there were nearly enough people who needed his organs, so he went undercover and visited them all, thinking he’d kill one and get out of it; but he finds that they aren’t what he expected (e.g. the one who needs a liver is a girl with hepatitis, not some drunk) and decides not to, and then one dies anyway and he’s off the hook.
Larry Niven wrote a number of short stories about organ transplants. In one of them, “The Jigsaw Man”, the primary source of transplant organs is the execution of criminals, which has led to more and more crimes being made punishable by death. The main character, who is in jail awaiting trial, escapes through what amounts to a stroke of luck, and finds that the organ banks are right next to the jail. Certain that he is about to be recaptured and eventually executed, he decides to commit a crime worthy of the punishment he is going to receive: destroying a large number of the preserved organs. At the end of the story, he’s brought to trial only for the crime he originally committed: running red lights.
I’ve read that story, but it’s not the one I was thinking of in the grandparent.
I didn’t intend to suggest that “The Jigsaw Man” was the story in question.
That sounds like an interesting short story...I wish you remembered the title so I could go track it down.
In current practice, organ transplant recipients are typically old people who die shortly after receiving the transplant. The problem is still interesting, but you have to impose some artificial restrictions.
Sure, it’s just a thought experiment, like trolley problems. I’ve seen it used in arguments against consequentialism/utilitarianism, but I’m not sure how many utilitarians bite this bullet (I guess it depends what type of consequentialist/utilitarian you are).
I noticed this as well. Pushing the fat man seemed obvious to me, and I wondered why everyone made such a fuss about it until I saw this dilemma.
Hypotheses: The hospital scenario involves a lot more decisions, so it seems as though there’s more rule-breaking.
You want a hard rule that medical personnel won’t do injury to the people in their hands.
The trolley scenario evokes prejudice against fat people. It needs variants like redirecting another trolley that has many fewer people in it to block the first trolley, or perhaps knocking two football players (how?) into the trolley’s path.
Specifically: You don’t want future people to avoid seeking medical treatment — or to burn doctors at the stake — out of legitimate fear of being taken apart for their organs. Even if you tell the victims that it’s in the greater good for doctors to do that once in a while, the victims’ goals aren’t served by being sacrificed for the good of five strangers. The victims’ goals are much better served by going without a medical checkup, or possibly leading a mob to kill all the doctors.
There is a consequentialist reason to treat human beings as ends rather than means: If a human figures out that you intend to treat him or her as a means, this elicits a whole swath of evolved-in responses that will interfere with your intentions. These range from negotiation (“If you want to treat me as a means, I get to treat you as a means too”), to resistance (“I will stop you from doing that to me”), to outright violence (“I will stop you, so you don’t do that to anyone”).
Of course you can add factors to the thought experiment that, even while following utilitarianism, will make you decide not to harvest the traveler for his organs. But that’s also dodging the problem it tries to show—that sometimes being strictly utilitarian leads to uncomfortable conclusions, that is, conclusions that conflict with the kind of ‘folk’, ‘knee-jerk’ morality we seem to have.
It depends what you consider the starting point for building the scenario. If you start by taking the story seriously as a real-world scenario, taking place in a real hospital with real people, then these are relevant considerations that would naturally arise as you were thinking through the problem, not additional factors that need to be added on. The work comes in removing factors to turn the scenario into an idealized thought experiment that boxes utilitarianism into one side, in opposition to our intuitive moral judgments. And if it’s necessary to make extensive or unrealistic stipulations in order to rule out seemingly important considerations, then that raises questions about how much we should be concerned about this thought experiment.
Sure. But what’s the point of putting “doctor” in the thought-experiment if it isn’t to arouse the particular associations that people have about doctors — one of which is the notion that doctors are trusted with unusual levels of access to other people’s bodies? It’s those associations that lead to people’s folk morality coming up with different answers to the “trolley” form and the “doctor” form of the problem.
A philosophical system of ethics that doesn’t add up to folk morality most of the time, over historical facts, would be readily recognized as flawed—or even as not really ethics at all but something else. A system of ethics that added up to baby-eating, paperclipping, self-contradiction, or, for that matter, to the notion that it’s evil to have systems of ethics, would not be the sort of ethics worth wanting.
Well, if a different thought experiment leads to different ‘folk-morality’-based conclusions while making no difference from a strictly utilitarian viewpoint, doesn’t that show the two are not fully compatible? Obviously, you can make them agree again by adding things, but that does not resolve the original problem.
For an ethical system to succeed it is indeed important that it resonate with folk morality, but I also think the phenotype of everyday folk morality is a hodge-podge of biases and illusions. If we took the basics of folk morality (what would that be—maybe the golden rule plus utilitarianism?), I think something more consistent could be forged.
I have two responses, based on two different interpretations of your comment:
I don’t see why a “strict utilitarian” would have to be a first-order utilitarian, i.e. one that only sees the immediate consequences of its actions and not the consequences of others’ responses to its actions. To avoid dealing with the social consequences (that is, extrapolated multi-actor consequences) of an act is to imagine it being performed in a moral vacuum: a place where nobody outside the imagined scene has any way of finding out what happened or responding to it. But breaking a rule when nobody is around to be injured by it or to report your rule-breaking to others is a significantly different thought experiment from breaking a rule under normal circumstances.
Humans (and, perhaps, AIs) are not designed to live in a world where the self is the only morally significant actor. They have to care about what other morally significant persons care about, and at more than a second-order level: they need to see not just smiling faces, but have the knowledge that there are minds behind those smiling faces.
And in any event, human cognition does not perform well in social or cognitive isolation: put a person in solitary confinement and you can predict that he or she will suffer and experience certain forms of cognitive breakdown; keep a scientific mind in isolation from a scientific community and you can expect that you will get a kook, not a genius.
Some people seem to treat the trolley problem and the doctor problem as the same problem stated in two different ways, in such a way as to expose a discrepancy in human moral reasoning. If so, this might be analogous to the Wason selection task, which exposes a discrepancy in human symbolic reasoning. Wason demonstrates that humans reason more effectively about applying social rules than about applying arithmetical rules isomorphic to those social rules. I’ve always imagined this as humans using a “social coprocessor” to evaluate rule-following, which is not engaged when thinking about an arithmetical problem.
Perhaps something similar is going on in the trolley problem and doctor problem: we are engaging a different “moral coprocessor” to think about one than the other. This difference may be captured by different schools of ethics: the doctor problem engages a “deontological coprocessor”, wherein facts such as a doctor’s social role and expected consequences of betrayal of duty are relevant. The trolley problem falls back to the “consequentialist coprocessor” for most readers, though, which computes: five dead is worse than one dead.
Perhaps the consequentialist coprocessor is a stupid, first-order utilitarian, whereas the deontological coprocessor deals with the limit case of others’ responses to our acts better.
Utilitarians can thereby extricate themselves from their prima facie conclusion that it’s right to kill the innocent man. However, the solution has the form: “We cannot do what is best utility-wise because others, who are not utilitarians, will respond in ways that damage utility to an even greater extent than we have increased it.”
However, this kind of solution doesn’t speak very well for utilitarianism, for consider an alternative: “We cannot do what is best paperclip-wise because others, who are not paperclippers, will respond in ways that tend to reduce the long term future paperclip-count.”
In fact, Clippy can ‘get the answer right’ on a surprisingly high proportion of moral questions if he is prepared to be circumspect, consider the long term, and keep in mind that no-one but him is maximizing paperclips.
But then this raises the question: Assuming we lived in a society of utilitarians, who feel no irrational fear at the thought of being harvested for the greater good, and no moral indignation when others are so harvested, would this practice be ‘right’? Would that entire society be ‘morally preferable’ to ours?
There is an alternate version where the man on the footbridge is wearing a heavy backpack, rather than being fat. That’s the scenario that Josh Greene & colleagues used in this paper, for instance.
Fat people are OK but you have a problem with football players?
There are an awful lot of people who are interested in decision problems who might just say “push ’em” as group-affiliation humor!
The overt reason for pushing a fat man is that it’s the way to kill only one person while mustering sufficient weight to stop the trolley. It seems plausible that what’s intended is a very fat man, or you could just specify a large person.
Specifying two football players seems like a way of providing a substantial amount of weight while involving few people.
Two additional things are in play here:
1) As others said, there’s a breach of an implicit social contract, which explains some squeamishness.
2) In this scenario, the “normal” person is the young traveler; he’s the one readers are likely to identify with.
I’d be inclined to bite the bullet too, i.e. I might prefer living in a society in which things like that happen, provided it really is better (i.e. it doesn’t just result in fewer people visiting doctors, etc.).
But in this specific scenario, there would be a better solution: the doctor offers to draw lots among the five patients to decide which of them will be sacrificed and have his organs distributed among the remaining four; the patients then have a choice between agreeing to that (an 80% chance of survival) and certain death.
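As a minimal sketch of the proposed lottery (the patient labels and the use of Python’s random module are my own illustration):

```python
import random

# Five patients, each of whom dies without a transplant. One is chosen by
# lot as the donor; the other four receive his organs and survive.
patients = ["A", "B", "C", "D", "E"]
donor = random.choice(patients)                  # each has a 1/5 chance
survivors = [p for p in patients if p != donor]

# Ex ante, each patient trades certain death for a 4/5 = 80% chance of survival.
print(f"donor: {donor}, survivors: {survivors}")
```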
I like this idea. For the thought experiment at hand, though, it seems too convenient.
Suppose the dying patients’ organs are mutually incompatible with each other; only the young traveler’s organs will do. In that scenario, should the traveler’s organs be distributed?
There’s probably a least convenient possible world in which I’d bite the bullet and agree that it might be right for the doctor to kill the patient.
Suppose that on planets J and K, doctors are robots, and that it’s common knowledge that they are “friendly” consequentialists who take the actions that maximize the expected health of their patients (“friendly” in the sense that they are “good genies” whose utility function matches human morality, i.e. they don’t save the life of a patient that wants to die, don’t value “vegetables” as much, etc.).
But on planet J, robot doctors treat each patient in isolation, maximizing his expected health, whereas on planet K doctors maximize the expected health of their patients as a whole, even if that means killing one to save five others.
I would prefer to live on planet K rather than on planet J, because even if there’s a small probability p that I’ll have my organs harvested to save five other patients, there’s also a probability 5 * p that my life will be saved by a robot doctor’s cold utilitarian calculation.
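A back-of-the-envelope comparison of the two policies (my own sketch; the value of p is just an assumed parameter):

```python
# Survival odds under the two planets' policies, for an assumed probability p
# of being the one harvested on planet K.
p = 0.001

# Planet J: nobody is harvested, so the five patients who needed organs
# (a fraction 5 * p of the population, by the comment's reasoning) die instead.
survival_J = 1 - 5 * p

# Planet K: the five patients are saved at the cost of the one donor.
survival_K = 1 - p

print(survival_J, survival_K)   # K comes out ahead for any p > 0
```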
“friendly” in the sense that they are “good genies” whose utility function matches human morality, i.e. they don’t save the life of a patient that wants to die, don’t value “vegetables” as much, etc.

Does this include putting less value on patients who would only live a short while longer (say, a year) with a transplant than without? AIUI this is typical of transplant patients.
Probably yes, which would mean that in many cases the sacrifice wouldn’t be made (though—least convenient possible world again—there are cases where it would).
I’m not. If I hadn’t heard about this or the trolley problem or equivalent, I’d probably do it without thinking and then be surprised when people criticised the decision.