I thought the same thing and went to dig up the original. Here it is:
One common illustration is called Transplant. Imagine that each of five patients in a hospital will die without an organ transplant. The patient in Room 1 needs a heart, the patient in Room 2 needs a liver, the patient in Room 3 needs a kidney, and so on. The person in Room 6 is in the hospital for routine tests. Luckily (for them, not for him!), his tissue is compatible with the other five patients, and a specialist is available to transplant his organs into the other five. This operation would save their lives, while killing the “donor”. There is no other way to save any of the other five patients (Foot 1966, Thomson 1976; compare related cases in Carritt 1947 and McCloskey 1965).
This is from the consequentialism page on the SEP, and it goes on to discuss modifications of utilitarianism that avoid biting the bullet (scalpel?) here.
This situation seems different to me for two reasons:

Off-topic reason: Killing the “donor” is bad for reasons similar to why two-boxing in Newcomb’s problem is bad. If doctors killed random patients, then patients wouldn’t go to hospitals and medicine would collapse. IMO the supposedly utilitarian answer to the transplant problem is not really utilitarian.

On-topic reason: The surgeons transplant organs to save lives, not to make babies. Saving lives and making lives seem very different to me, but I’m not sure why (or if) they differ from a utilitarian perspective.

Analogously, “killing a less happy person and conceiving a happier one” may be wrong in the long term, by changing society into one where people feel unsafe.
If doctors killed random patients then patients wouldn’t go to hospitals and medicine would collapse.
You’re fixating on the unimportant parts.
Let me change the scenario slightly to fix your collapse-of-medicine problem: Once in a while, the government consults its random number generator and selects one or more people, as needed, to be cut up for organs. The government is careful to keep the benefits (in lives or QALYs or whatever) higher than the costs. Any problems here?
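To make that accounting concrete, here’s a toy version of the check the government would have to run. All numbers and names are invented for illustration; this is just the naive sum, with none of the complications raised below:

```python
# Naive QALY accounting for the organ lottery (all numbers invented).
def harvest_is_net_positive(recipients_saved: int,
                            qalys_gained_per_recipient: float,
                            qalys_lost_by_donor: float) -> bool:
    """True iff the expected QALYs gained exceed the QALYs lost."""
    return recipients_saved * qalys_gained_per_recipient > qalys_lost_by_donor

# Five recipients gaining ~20 QALYs each vs. one donor losing ~40:
print(harvest_is_net_positive(5, 20.0, 40.0))  # True -- the naive sum says "go"
```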
That people are stupefyingly irrational about risks, especially where medicine is concerned.

As an example: my paternal grandmother died of a treatable cancer less than a year before I was born, out of a fear of doctors she had picked up from post-war propaganda about the T4 euthanasia program. Now this is a woman who was otherwise as healthy as they come, living in America decades after the fact, refusing to go in for treatment because she was worried some oncologist was going to declare a full-blooded German immigrant genetically impure and kill her to improve the Aryan race.

Now, granted, that’s a rather extreme case, and she wasn’t exactly stable on a good day from what I hear, but the point is that whatever bits of crazy we have get amplified completely out of proportion when medicine comes into it. People already get scared out of seeking treatment over rumors of mythical death panels or autism-causing vaccine programs, so you can only imagine how nutty they would get over even a small risk of actual government-sanctioned murder in hospitals.
(Not to mention that there are quite a lot of people with a perfectly legitimate reason to believe those RNGs might “just happen” to come up in their cases if they went in for treatment; it’s not like American bureaucrats have never abused their power to target political enemies before.)
The traditional objection to this sort of thing is that it creates perverse incentives: the government, or whichever body is managing our bystander/trolley-tracks interface, benefits in the short term (smoother operations, more people it can claim to have saved) if it interprets its numbers to maximize the number of warm bodies it has to work with, and the people in the parts pool benefit from the opposite. At minimum we’d expect that to introduce a certain amount of friction. In the worst case we could imagine it leading to a self-reinforcing establishment that firmly believes it’s being duly careful even when independent data says otherwise: consider how the American War on Drugs has played out.

That’s a very weak objection, given that the real world is full of perverse incentives and still manages to function, more or less, sorta-kinda...
And of course, I wouldn’t trust a government made of mere humans with such a determination, because power corrupts humans. A friendly artificial intelligence, on the other hand...
Edited away an explanation so as not to take the last word
Any problems here?
Short answer, no.

Only if the Q in QALY takes into account the fact that people will be constantly worried they might be picked by the RNG.
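To see why that adjustment matters, here’s a toy calculation (all numbers invented): even a tiny anxiety penalty, multiplied across the whole population, can swamp the QALYs the lottery saves.

```python
# Toy QALY accounting with the anxiety cost included (all numbers invented).
population = 300_000_000        # hypothetical country
qalys_saved_per_year = 50_000   # net QALYs the organ lottery adds annually
anxiety_penalty = 0.001         # each person loses 0.1% of a quality-adjusted year

naive_total = qalys_saved_per_year
adjusted_total = qalys_saved_per_year - population * anxiety_penalty

print(naive_total)     # 50000     -- looks like a clear win
print(adjusted_total)  # -250000.0 -- the worry cost alone flips the sign
```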
I’d like to keep this thread focused on making a life vs. saving a life, not on arguments about utilitarianism in general. I realize there is much more to be said on this subject, but I propose we end discussion here.