Then I have an “easy” patch: let the entity that does the spotting and killing be incorruptible and infallible. Like, an AI, an army of robots, or something. With that, I don’t see any obvious flaw beyond the fact that, with this level of technology, there are very probably better alternatives than transplantation. But the idea of creating a machine for the explicit purpose of killing people might be even more creepy than the police state we’re vaguely familiar with.
I suspect that the space of killer organ-transplanting AIs programmed by humans has far more negative outcomes than positive ones. Even if we stipulate that the AI is incorruptible and infallible, there are still potential negative consequences:
People still have an incentive to sabotage their own health and to avoid getting tested (see wedrifid’s argument)
People would live in fear of being randomly killed, even if the chance were small
Allowing an AI to be built that can kill people might set a bad precedent
Permitting the practice of killing innocent people might set a bad precedent
You could get rid of the self-sabotage and the fear by keeping the AI secret. But a world where people can build secret AIs that kill innocents, without any kind of consent or vote, still seems like a bad place.
But the idea of creating a machine for the explicit purpose of killing people might be even more creepy than the police state we’re vaguely familiar with.
Part of the reason it’s creepy is that, just as with a police state, the outcome is probably going to be bad in the vast majority of cases.
(a) Let the aliens that happen to visit us cure cancer, except for 1 random patient out of 100, that they will let die, then eat.
This is an interesting case. My initial reaction was to let the aliens cure cancer and eat 1 in 100 cancer patients (after they died). Yet as I thought more about it, and about why people might find the scenario creepy, I became more and more worried.
In a one-shot negotiation, it would make sense on consequentialist grounds to accept the deal. The 1% of patients the aliens eat see no change in outcome: they would have died of cancer anyway. Yet, as usual with thought experiments designed to test consequentialism, the answer changes once you consider the full range of possible consequences.
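To make the one-shot arithmetic concrete, here’s a toy sketch in Python. The baseline survival rate is a number I made up for illustration; the thought experiment doesn’t specify one:

```python
# Toy expected-value comparison for the aliens' offer. The baseline
# survival rate is an illustrative assumption, not something the
# thought experiment specifies.

N = 100                    # cancer patients covered by one round of the deal
p_survive_untreated = 0.5  # assumed survival rate without the alien cure

# Without the deal, every patient faces the baseline survival rate.
survivors_no_deal = N * p_survive_untreated

# With the deal, 99 patients are cured outright and 1 is left to die.
survivors_deal = (N - 1) * 1.0

print(f"expected survivors, no deal: {survivors_no_deal:.0f}")  # 50
print(f"expected survivors, deal:    {survivors_deal:.0f}")     # 99

# The deal comes out ahead whenever 99 > 100 * p_survive_untreated,
# i.e. for any baseline survival rate below 99%. That is why the
# one-shot version looks like an easy consequentialist yes -- before
# counting the downstream consequences discussed below.
```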
This question hinges on why the aliens want to eat humans. If the aliens had some good reason to need to eat humans in order to cure us (say, ingesting cancerous humans lets them figure out how to stop all the different types of cancer), then that might be OK. Yet there are many ways that allowing aliens to get their hands on human specimens might be a bad idea. Maybe they could clone us, or engineer pathogens.
Then there is the aspect of aliens eating human bodies, which is creepy for good reason. Letting aliens acquire a taste for human flesh might be a bad idea. It might be a Schelling point that we shouldn’t let them cross.
For a consequentialist to accept the aliens’ deal, there must be strong evidence for most of the following:
This is a one-shot deal
The aliens will leave afterward and never be seen again
Letting aliens access human specimens won’t have other negative consequences down the line
Letting aliens eat human bodies won’t tempt them to want more, which could lead to other negative consequences
The aliens will follow through on their side of the deal
The aliens aren’t lying, wrong, or confused about anything, and we understand their psychology well enough to judge the above
The amount of evidence necessary to establish these conditions would be astronomical, and gathering it would probably take decades of cooperation with these particular aliens.
Likewise, the burden of evidence for showing that killer doctors and killer doctorbot AIs provide a net benefit is astronomically high. Absent that evidence, it’s hard for a consequentialist not to get creeped out by the plausibility of negative outcomes.
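To gesture at why I call it astronomical, here’s a back-of-the-envelope Bayesian sketch. The priors and the confidence threshold are invented for illustration; the point is just that the required bits of evidence add up fast:

```python
import math

# Rough Bayesian sketch of the evidential burden. Both the priors and
# the required confidence are made-up illustrative numbers, not claims
# about actual aliens.

priors = {
    "one-shot deal":                       0.10,
    "aliens leave and never return":       0.10,
    "no downstream misuse of specimens":   0.05,
    "no acquired taste for human flesh":   0.20,
    "aliens honor their side of the deal": 0.30,
    "we understand alien psychology":      0.05,
}

required = 0.99  # confidence we'd want in EACH condition before accepting

def bits_needed(prior: float, posterior: float) -> float:
    """Bits of evidence (log2 likelihood ratio) to move prior to posterior."""
    odds = lambda p: p / (1.0 - p)
    return math.log2(odds(posterior) / odds(prior))

total = 0.0
for name, p in priors.items():
    bits = bits_needed(p, required)
    total += bits
    print(f"{name:37s} {bits:5.1f} bits")
print(f"{'total':37s} {total:5.1f} bits")

# Tens of bits of independent, hard-to-fake evidence about alien motives
# and honesty -- the kind of thing that plausibly takes decades of
# interaction to accumulate.
```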
Plenty of people survive cancer. The specific cancer patients the aliens eat might have lived if not for the aliens.
If I’m reading the thought experiment correctly, the aliens are only allowed to let people die and then eat them. So the aliens wouldn’t be causing any patients to die who wouldn’t have died anyway. If the aliens were allowed to eat humans before they died, that would change the whole example and make consequentialists even more pessimistic.
It wasn’t specified that they died of cancer, but yeah, my misreading, thanks.