What if, instead of deciding whether the doctor murders the patient in secret when she comes to the hospital, we have to decide whether the government (perhaps armed with genetic screening results seized from police databases and companies like 23andMe) passes a law allowing police to openly kill and confiscate organs from anyone whose organs could presumably save five or more transplant patients?
As far as I can tell, this would have no bad effects beyond the obvious one of killing the people involved—it wouldn’t make people less likely to go to hospitals or anything—but it keeps most of the creepiness of the original. Which makes me think that although everything you say in this post is both true and important (and I’ve upvoted it), it doesn’t get to the heart of why most people are creeped out by the transplant example.
It would have quite a few bad knock-on effects:
1) You have handed the government the ability to decide, at any point, to kill anyone they consider undesirable, provided they can find five compatible transplant recipients; this is a massive increase in their power, and a big step towards a totalitarian society.
2) You are discouraging people from undergoing genetic screening.
3) You are discouraging people from living healthily. If you are unhealthy, your organs are of less use to the government, and hence you are more likely to survive.
4) You are encouraging people to go off the grid, since people who are off the grid are less likely to be found for the purposes of harvesting.
Yes, these logical reasons are not directly why people are creeped out; but if you found a less harmful scenario, you would likely also find it less creepy.
For instance, most people would find it less creepy if the harvesting were limited to those already in prison on long (20+ year) sentences; and it also seems that such a policy would have fewer indirect harms.
There’s an incentive here to raise the frequency and length of prison sentences, though. I think I actually saw a “death sentence for repeated jaywalking” scenario in some TV show, and IIRC it was actually caused by some medically-better-for-everyone “reason”, and it was pretty creepy.
As far as I can tell, this would have no bad effects beyond the obvious one of killing the people involved—it wouldn’t make people less likely to go to hospitals or anything
No, but it would make them afraid to go outside, or at least to go anywhere in the vicinity of police. This law might encourage people to walk around with weapons to deter police from nabbing them, and/or to fight back. People would be afraid to get genetic screening lest they make their organs a target. They would be afraid to go to police stations to report crimes lest they come out minus a kidney.
People with good organs would start bribing the police to defer their harvesting, and corruption would become rampant. Law and order would break down.
This sounds like an excellent plot for a science fiction movie about a dystopia, which indicates that it fails on consequentialist grounds unless our utility function is so warped that we are willing to create a police state to give organ transplants.
Not to mention an incentive to self-mutilate: that is, to do damage to oneself such that your organs are no longer desirable, but which leaves you better off than if you’d been harvested. Giving yourself HIV, for example.
Fourth reply: people deeply value autonomy.
Fifth reply: While in this case I don’t think that the policy is the right consequentialist thing to do, in general I expect consequentialism to endorse some decisions that violate our current commonsense morality. Such decisions are usually seen as moral progress in retrospect.
Upvoted because the fourth reply seems much closer to a true objection.
The probability of being killed in such a way would be tiny and wouldn’t significantly alter expected lifespan. However, people are bad at intuitive risk evaluation, and even if any given person were at least twice as likely to have their life saved as destroyed by the policy, people would feel endangered and unhappy, which may outweigh the positive benefit. But if this concern didn’t apply (e.g. if most people learned to evaluate risks correctly on the intuitive level), I’d bite the bullet and vote for the policy.
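To make that first claim concrete, here is a minimal back-of-the-envelope sketch; every number in it (the per-person harvest risk, the save probability, the population size) is invented purely for illustration:

```python
# All numbers are invented for illustration, not estimates.
p_harvested = 1e-5  # hypothetical lifetime chance the policy kills you
p_saved = 2e-5      # hypothetical chance it saves your life (twice as likely)

# Expected net change in any one person's survival probability:
net = p_saved - p_harvested
print(f"Net change per person: {net:+.6f}")  # tiny and positive

# Yet the *feeling* of being endangered applies to the whole population,
# not just to the handful of people the policy actually touches.
population = 300_000_000
print(f"Expected net lives saved: {net * population:,.0f}")  # ~3,000
```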
By the way, upvoted for correct application of the least convenient possible world technique.
Good point. My first objection is the same as prase’s; my second is that a government that depends on popular support shouldn’t enforce policies that creep out the citizens (either because they’d lose the next election in a landslide to a party with different values or decision theory, or because, in a Parfit’s Hitchhiker way, if it were clear they would do such things then they’d have lost the previous election).
My third is that the creepiness here comes from our very real (and very understandable in consequentialist terms) fear of allowing the government too much power over citizens’ life and death. If instead you asked, should we make it legal for people to voluntarily off themselves when by so doing they can save several other lives, and should we make it an honorable thing to do, then I’m not nearly as disturbed by the idea. (There are myriad variations to try out, but generally whenever it gets creepy I can identify some sort of awful incentive structure being set up.)
I probably shouldn’t have said “government”. We get the same issues if the doctor just wanders around town, spots a genetically compatible person with his Super-Doctor-Vision, and kills them at their house. Or really through any means other than waiting for compatible donors to show up at the hospital.
Your point five is well taken, though.
This doesn’t fix the problem; it only changes the location. Giving your national medical association the power of citizens’ life and death is almost as bad as giving it to the government.
People won’t be afraid in hospitals; instead they’ll be afraid in their homes. They will have an incentive to try to hide from anyone who might be a doctor, or to kill them preemptively.
This policy would be a declaration of war between doctors and citizens. I can’t see the consequences going well.
Then I have an “easy” patch: let the entity that does the spotting and killing be incorruptible and infallible. Like, an AI, an army of robots, or something. With that, I don’t see any obvious flaw beyond the fact that, with this level of technology, there are very probably better alternatives than transplantation. But the idea of creating a machine for the explicit purpose of killing people might be even more creepy than the police state we’re vaguely familiar with.
Compare with the comment I saw somewhere with this dilemma:
(a) Let the aliens that happen to visit us cure cancer, except for 1 random patient out of 100, whom they will let die, then eat.
(b) Just let the aliens go, never to be heard of again.
I suspect that the space of killer organ-transplanting AIs programmed by humans has far more negative outcomes than positive ones. Even if we stipulate that the AI is incorruptible and infallible, there are still potential negative consequences:
People still have an incentive to sabotage their own health and to not get tested (see wedrifid’s argument)
People would live in fear of getting randomly killed, even if the chance was small
Allowing an AI to be built that can kill people might be a bad precedent
Allowing the practice of killing innocent people might be a bad precedent
You could get rid of the self-sabotage and fear if you make the AI a secret. But a world where people can make secret AIs to kill innocents without any kind of consent or vote still seems like a bad place.
But the idea of creating a machine for the explicit purpose of killing people might be even more creepy than the police state we’re vaguely familiar with.
Part of the reason it’s creepy is that, just like a police state, the outcome is probably going to be bad in the vast majority of cases.
(a) Let the aliens that happen to visit us cure cancer, except for 1 random patient out of 100, whom they will let die, then eat.
This is an interesting case. My initial reaction was to let the aliens cure cancer and eat 1 in 100 cancer patients (after they died). Yet as I thought about it more, and about why people might find the scenario creepy, I became more and more worried.
In a one-shot negotiation, it would make sense on consequentialist grounds to accept their deal. The 1% of patients that the aliens eat won’t have a change in outcome: they would have died of cancer anyway. Yet, as usual with these thought experiments designed to test consequentialism, the answer changes when you consider the full range of possible consequences.
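As a minimal sketch of that one-shot arithmetic, under the stipulated reading that the eaten 1% would have died anyway; the patient count and baseline survival rate are made up:

```python
# Illustrative numbers only: 10,000 cancer patients, 60% baseline survival.
patients = 10_000
baseline_survival = 0.60

survivors_without_deal = patients * baseline_survival  # 6,000

# With the deal, as stipulated: the aliens cure everyone except 1 in 100,
# whom they merely "let die" -- patients who would have died anyway.
survivors_with_deal = patients * 0.99  # 9,900

print(f"Without the deal: {survivors_without_deal:,.0f} survivors")
print(f"With the deal:    {survivors_with_deal:,.0f} survivors")
# The one-shot case looks clear-cut; the worry is everything outside it.
```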
This question hinges on why the aliens want to eat humans. If the aliens had some good reason why they need to eat humans in order to cure us (e.g. ingesting cancerous humans lets them figure out how to stop all the different types of cancer), then that might be OK. Yet there are many ways that allowing aliens to get their hands on human specimens might be a bad idea. Maybe they could clone us, or engineer pathogens.
Then there is the aspect of aliens eating human bodies. That’s creepy for a good reason. Letting aliens gain a taste for human flesh might be a bad idea. It might be a Schelling point that we shouldn’t let them cross.
For a consequentialist to accept the aliens’ deal, there must be strong evidence for most of the following:
This is a one-shot deal
The aliens will leave afterward and never be seen again
Letting aliens access human specimens won’t have other negative consequences down the line
Letting aliens eat human bodies won’t tempt them to want to eat more human bodies, which could lead to other negative consequences
The aliens will follow through on their side of the deal
The aliens aren’t lying, wrong, or confused about anything, and we understand their psychology well enough to judge the above
The amount of evidence necessary to establish these conditions would be astronomical, and it would probably take decades of cooperation with these particular aliens.
Likewise, the burden of evidence for showing that killer doctors and killer doctorbot AIs provide a net benefit is also astronomically high. In the absence of that evidence, it’s hard for a consequentialist not to get creeped out by the plausibility of negative outcomes.
Plenty of people survive cancer. The specific cancer patients the aliens eat might have lived if not for the aliens.
If I’m reading the thought experiment correctly, the aliens are only allowed to let people die and then eat them. So the aliens wouldn’t be causing any patients to die who wouldn’t have died anyway. If the aliens were allowed to eat humans before they died, then that would change the whole example and make consequentialists even more pessimistic.
It wasn’t specified that they died of cancer, but yeah, my misreading, thanks.
(a) Let the aliens that happen to visit us cure cancer, except for 1 random patient out of 100, whom they will let die, then eat.
(b) Just let the aliens go, never to be heard of again.
Let die? As in wait until they die by natural(ish) causes and then eat the corpse? This sounds like a no-brainer. (Cryonicists might have some concerns about the meaning of death, however. Hopefully the patients agree that they are dead at the time!)
Just in case you humans decide on this method, I volunteer to be the incorruptible and infallible machine.
Oh, wait, I already did offer something like that, and you humans invented contorted rationalizations about how you wouldn’t trust me, because I’d just use that power to impregnate female humans or something.
I think the first hurdle is whether this is a worthwhile policy at all. We’d need to weigh the expected cost in donor lives against the improvement in recipient lives; I would think that the factors which make for the best donors, like youth and health, would tend to militate against the policy given the profile of recipients, who are often older and already known to be unhealthy. Secondly, even if the program were positive-sum, we’d need to weigh it against alternatives (like forced donation at death) to ensure that it was actually the best policy possible.
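A hedged sketch of that first weighing, in expected life-years; every parameter here (years forfeited, recipients per donor, years gained, success rate) is an assumption chosen only to illustrate the point about donor and recipient profiles:

```python
# All parameters are assumptions for illustration, not real transplant data.
donor_years_lost = 50       # a young, healthy donor forfeits many life-years
recipients_per_donor = 5
recipient_years_gained = 8  # older, sicker recipients each gain fewer years
transplant_success = 0.7    # not every transplant takes

expected_years_gained = (recipients_per_donor
                         * recipient_years_gained
                         * transplant_success)
print(f"Life-years lost per donor:  {donor_years_lost}")
print(f"Expected life-years gained: {expected_years_gained:.0f}")  # 28
# With these (made-up) numbers the policy fails on its own terms, before
# we even reach indirect harms or alternatives like donation at death.
```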
Obviously, you could restate the hypothetical until all the factors which must be weighed demand Policy 145. But this is almost certainly a trivial exercise, given the rule set governing consequentialism. At that point, however, I think there are a few responses available: (1) the world is not recognizable to me and I cannot, even with difficulty, really imagine what the balance of worlds close to it would be like; (2) while this world sounds great, I think I’m better off in this one and so I can safely say that I do not prefer it for myself; (3) the world is so different from the actual world that it is difficult to say whether such a world would be internally consistent while usefully similar to our own.
I think response (1) allows us to “bite the bullet” on paper, knowing it will never be fired; response (2) seems like it may usefully encapsulate our problems with the hypothetical and generate the correct response “good for them, then”; (3) this response allows a denial or agnosticism about the world and “out there” hypotheticals in general.
I think the proper response to this process is all three: I should agree were it so; I should properly recognize that I don’t like it (and don’t have to); and I can deny that the hypothetical reveals any important information about theory. I think these responses could be elided, though, simply by noting what was suggested earlier: given a static rule set and a fully malleable world, the generation of repugnant (or euphoric) results is trivial and thus unhelpful.
Why not just label the organ collection a “tax”, and say “Even if the tax burdens some people disproportionately, it helps people more than it harms people, and is therefore justified”?
The idea of taxation is usually that people get taxed equally. You can’t kill everyone 10%; you can only kill 10% of people 100%.
Also, when you tax someone (especially someone with lots of money), they’re usually capable of living afterwards.
I’d bet that if you look at the effects of ordinary taxes, and you count the benefits separately from the harms, you’d find that statistically the tax kills at least one person to help more than one person, just like the “organ tax”.
Of course, the organ tax vs. normal tax comparison is a comparison of seen versus unseen—you can’t tell who the people are who were killed by the taxes since they are a statistical increase in deaths with nobody getting their hands bloody—but I hope we’ve learned that seen vs. unseen is a bias.
The claim that ordinary taxation directly causes any deaths is actually a fairly bold one, whatever your opinion of them. Maybe I’m missing something. What leads you to believe that?
In progressive tax regimes it’s rather hard for people to literally be taxed into starvation, but that doesn’t mean that no deaths occur on the margins. Consider for example the case where a person needs expensive medical treatment that’s not covered by insurance, they (or their family) can’t afford it, but it’s close enough to their means that they would have been able to if it wasn’t for their taxes. Or consider a semi-skilled laborer that’s making enough money that their taxes are nontrivial, but not enough to support their family on base pay once taxes are factored in. In order to make ends meet they take a more dangerous position to collect hazard pay, and a year later they die in an industrial accident.
And so forth. Looking at the margins often means looking at unusual cases, but that doesn’t mean there aren’t any cases where the extra money would have made a difference. That’s not to say that dropping those taxes (and thus the stuff they fund) would necessarily be a utilitarian good, of course—only that there’s stuff we can put in the minus column, even if we’re just looking at deaths.
Ah, the hazardous profession case is one that I definitely hadn’t thought of. It’s possible that Jiro’s assertion is true for cases like that, but it’s also difficult to reason about, given that the hypothetical world in which said worker was not taxed may have a very different kind of economy as a result of this same change.
I can think of a hypothetical person who has a 99.9% chance of living without the tax, and a 99.8% chance with it. And I can also think of there being more than 1000 such hypothetical people.
“Can afford to live without it but not with it” implies going all the way down to a 0% chance. You don’t need to go down to a 0% chance for there statistically to be deaths.
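As a concrete check on that arithmetic, using the hypothetical figures above:

```python
# Hypothetical figures from the comment above.
people = 1000
p_survive_without_tax = 0.999
p_survive_with_tax = 0.998

extra_deaths = people * (p_survive_without_tax - p_survive_with_tax)
print(f"Expected extra deaths: {extra_deaths:.1f}")  # ~1.0

# No individual's risk ever rises to 100%, yet statistically about one
# person dies -- the seen-vs-unseen point made earlier in the thread.
```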
But how does that work? What mechanism actually accounts for that difference? Is this hypothetical single person we could have individually exempted from taxes just barely unable to afford enough food, for example? I don’t yet buy the argument that any taxes I’m aware of impose enough of a financial burden on anyone to pose an existential risk, even a small one (like a 0.1% difference in their survival odds). This is not entirely a matter of random chance, since levels of taxation are generally calibrated to income, presumably at least partially for the purpose of specifically not endangering anyone’s ability to survive.
Also, while I realize that your entire premise here is that we’re counting the benefits and the harms separately, doing so isn’t particularly helpful in demonstrating that a normal tax burden is comparable to a random chance of being killed, since the whole point of taxation is that the collective benefits are cheaper when bought in bulk than if they had to be approximated on an individual level. While you may be in the camp of people who claim that citizenship in (insert specific state, or even states in general) is not a net benefit to a given individual’s viability, saying “any benefits don’t count” and then saying “it’s plausible that this tax burden is a minor existential risk to any given individual” is not particularly convincing.
There are all sorts of random possibilities that could reduce someone’s life expectancy by a tiny amount but which statistically over large numbers of people would result in more than one extra death. Imagine that someone has to work one extra hour per month and there’s a tiny chance of dying associated with it, or that they delay a visit to the doctor by one week, etc. Or all the other mechanisms which cause poorer people to have lower life expectancies (I highly doubt you can’t think of any), which mean that someone who gets marginally poorer by a tiny amount would on the average not live as long.
In Italy, quite a few entrepreneurs have committed suicide since tax rates were raised, which may or may not count depending on what you mean by “directly”.
Would that eliminate much of the remaining creepiness from Yvain’s scenario? (It scarcely makes a difference to me, though others may differ.)
It does for me. “Tax” implies social acceptance in a way that “secret police” does not.
But people still die.
I think a major part of how our instinctive morality works (and a reason humans, as a species, have been so successful) is that we don’t go for cheap solutions. The most moral thing is to save everyone. The solution here is a stopgap that just diminishes the urgency of technology to grow organ replacements, and even if short-term consequentially it leaves more people alive, it in fact worsens our long-term life expectancy by not addressing the problem (which is that people’s organs get damaged or wear out).
If a train is heading for 5 people, and you can press a switch to make it hit 1 person, the best moral decision is “I will find a way to save them all!” Even if you don’t find that solution, at least you were looking!
The solution here is a stopgap that just diminishes the urgency of technology to grow organ replacements, and even if short-term consequentially it leaves more people alive, it in fact worsens our long-term life expectancy by not addressing the problem (which is that people’s organs get damaged or wear out).
[parody mode]
Penicillin is a stopgap that just diminishes the urgency of technology to move people onto a non-organic substrate, and even if short-term consequentially it leaves more people alive, it in fact worsens our long-term life expectancy by not addressing the problem (which is that people live in an organic substrate vulnerable to outside influence).
[/parody mode]
Have you ever heard the saying “the perfect is the enemy of the good”? By insisting that only perfect solutions are worthwhile, you are arguing against any measure that doesn’t make humans immortal.
My point was that random culling for organs is not the best solution available to us. Organ growth is not that far in the future, and it’s held back primarily by moral concerns. This is not analogous to your parody, which more closely resembles something like: “any action that does not work towards achieving immortality is wrong”.
The point is that people always try to find better solutions. If we lived in a world where, as a matter of fact, there were no way whatsoever to get organs for transplant patients except from living donors, then from a consequentialist standpoint some sort of random culling would in fact be the best solution. And I’m saying, that is not the world we live in.
Related, here is Eliezer’s answer to the railroad switch dilemma from The Ends Don’t Justify the Means (Among Humans):
“You stipulate that the only possible way to save five innocent lives is to murder one innocent person, and this murder will definitely save the five lives, and that these facts are known to me with effective certainty. But since I am running on corrupted hardware, I can’t occupy the epistemic state you want me to imagine. Therefore I reply that, in a society of Artificial Intelligences worthy of personhood and lacking any inbuilt tendency to be corrupted by power, it would be right for the AI to murder the one innocent person to save five, and moreover all its peers would agree. However, I refuse to extend this reply to myself, because the epistemic state you ask me to imagine, can only exist among other kinds of people than human beings.”
Why stop there? Why not say that the moral thing is to save even more people than are present, or will ever be born, etc.?