Two additional things are in play here:
1) As others said, there’s a breach of an implicit social contract, which explains some squeamishness
2) In this scenario, the “normal” person is the young traveler; he’s the one readers are likely to identify with.
I’d be inclined to bite the bullet too, i.e. I might prefer living in a society in which things like that happen, provided it really is better overall (i.e. it doesn’t just result in fewer people visiting doctors, etc.).
But in this specific scenario, there would be a better solution: the doctor offers to draw lots among the patients to decide which of them will be sacrificed so that his organs can be distributed among the remaining four; the patients then have a choice between agreeing to that (an 80% chance of survival) and certain death.
But in this specific scenario, there would be a better solution: the doctor offers to draw lots among the patients to decide which of them will be sacrificed so that his organs can be distributed among the remaining four; the patients then have a choice between agreeing to that (an 80% chance of survival) and certain death.
I like this idea. For the thought experiment at hand, though, it seems too convenient.
Suppose the dying patients’ organs are mutually incompatible with each other; only the young traveler’s organs will do. In that scenario, should the traveler’s organs be distributed?
There’s probably a least convenient possible world in which I’d bite the bullet and agree that it might be right for the doctor to kill the patient.
Suppose that on planets J and K, doctors are robots, and that it’s common knowledge that they are “friendly” consequentialists who take the actions that maximize the expected health of their patients (“friendly” in the sense that they are “good genies” whose utility function matches human morality, i.e. they don’t save the life of a patient that wants to die, don’t value “vegetables” as much, etc.).
But on planet J, robot doctors treat each patient in isolation, maximizing his expected health, whereas on planet K doctors maximize the expected health of their patients as a whole, even if that means killing one to save five others.
I would rather live on planet K than on planet J, because even if there’s a small probability p that I’ll have my organs harvested to save five other patients, there’s also a probability of roughly 5 * p that my own life will be saved by a robot doctor’s cold utilitarian calculation.
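To make the arithmetic behind that preference concrete, here is a minimal sketch; the baseline risk and the harvest probability are hypothetical placeholders, and the only structural assumption carried over from the comment is that each harvested patient saves five others:

```python
# Minimal sketch of the 5*p vs. p argument. All numbers are hypothetical
# placeholders; the structural assumption is that one harvest saves five patients.
baseline_death = 0.01   # assumed chance of dying for lack of a donor organ
p = 1e-4                # assumed chance of being the one patient harvested on planet K

# Planet J: doctors never harvest, so only the baseline risk applies.
death_j = baseline_death

# Planet K: a probability p of being harvested, but a probability 5 * p of
# being one of the five patients saved by someone else's harvest.
death_k = baseline_death + p - 5 * p

print(f"Planet J mortality: {death_j:.4%}")
print(f"Planet K mortality: {death_k:.4%}")  # lower by 4 * p under these assumptions
```

Under these assumptions planet K comes out ahead by 4 * p; the conclusion only flips if being harvested is somehow more likely than being saved, which contradicts the one-donor-saves-five setup.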
“friendly” in the sense that they are “good genies” whose utility function matches human morality, i.e. they don’t save the life of a patient that wants to die, don’t value “vegetables” as much, etc.
Does this include putting less value on patients who would only live a short while longer (say, a year) with a transplant than without? As I understand it, this is typical of transplant patients.
Probably yes, which would mean that in many cases the sacrifice wouldn’t be made (though—least convenient possible world again—there are cases where it would).