If we ever pass up a chance to literally hold one child’s face to a fire and end malaria, we have screwed up.
In another comment James A. Donald suggests a way torturing children could actually help cure malaria:
To cure malaria, we really need to experiment on people. For some experiments, obtaining volunteers is likely to be difficult, and if one experimented on non volunteering adults, they would probably create very severe difficulties. Female children old enough to have competent immune systems, but no older, would be ideal.
Would you be willing to endorse this proposal? If not, why not?
If I’m not fighting the hypothetical, yes I would.
If I encountered someone claiming that in the messy real world, then I’d run the numbers VERY carefully and most likely conclude that the probability of him actually telling the truth and being sane is infinitesimal. Specifically, the claim that it’d be easier to kidnap someone than to find a volunteer (say, an adult willing to do it in exchange for their family receiving a large sum of money) sounds highly implausible.
Specifically, the claim that it’d be easier to kidnap someone than to find a volunteer (say, an adult willing to do it in exchange for their family receiving a large sum of money) sounds highly implausible.
What’s your opinion of doing it Tuskegee-style, rather than kidnapping them or getting volunteers? (One could believe that there might be a systematic difference between people who volunteer and the general population, for example.)
In general, given ethical norms as they currently exist, rather than in a hypothetical universe where everyone is a strict utilitarian, I think the expected returns on such an experiment are unlikely to be worth the reputational costs.
The Tuskegee experiment may have produced some useful data, but it certainly didn’t produce returns on the scale of reducing global syphilis incidence to zero. Likewise, even extensive experimentation on abducted children is unlikely to do so for malaria. The Tuskegee experiment, though, is still seen as a black mark on the reputation of medical researchers and the government; I’ve encountered people who, having heard of it, genuinely believed that it, rather than the extremely stringent standards that now govern publishable studies, was an accurate description of the behavior of present-day researchers. That sort of thing isn’t easy to escape.
Any effective utilitarian must account for the fact that we’re operating in a world which is extremely unforgiving of behavior such as cutting up a healthy hospital visitor to save several in need of organ transplants, and condition their behavior on that knowledge.
Here’s one with actual information gained: Imperial Japanese experimentation about frostbite:

For example, Unit 731 proved that the best treatment for frostbite was not rubbing the limb, which had been the traditional method, but immersion in water a bit warmer than 100 degrees, but never more than 122 degrees.

The cost of this scientific breakthrough was borne by those seized for medical experiments. They were taken outside and left with exposed arms, periodically drenched with water, until a guard decided that frostbite had set in. Testimony from a Japanese officer said this was determined after the “frozen arms, when struck with a short stick, emitted a sound resembling that which a board gives when it is struck.”
I don’t get the impression that those experiments destroyed a lot of trust—nothing compared to the rape of Nanking or Japanese treatment of American prisoners of war.
However, it might be worth noting that that sort of experimentation doesn’t seem to happen to people who are affiliated with the scientists or the government.
Logically, people could volunteer for such experiments and get the same respect that soldiers do, but I don’t know of any real-world examples.
I don’t get the impression that those experiments destroyed a lot of trust—nothing compared to the rape of Nanking or Japanese treatment of American prisoners of war.
It’s hard for experiments to destroy trust when those doing the experiments aren’t trusted anyway because they do other things that are as bad (and often on a larger scale).
Logically, people could volunteer for such experiments and get the same respect that soldiers do, but I don’t know of any real-world examples.
I was going to say that I didn’t think that medical researchers had ever solicited volunteers for experiments which are near certain to produce such traumatic effects, but on second thought, I do recall that some of the early research on the effects of decompression (as experienced by divers) was done by a scientist who solicited volunteers to be subjected to decompression sickness. I believe that some research on the effects of dramatic deceleration was also done similarly.
I have heard of someone who was trying to determine the biomechanics of crucifixion, what part of the forearm the nail goes through and whether suffocation is actually the main cause of death and so on, who ran some initial tests with medical cadavers, and then with tied-up volunteers, some of whom were disappointed that they weren’t going to have actual nails driven through their wrists. Are extreme masochists under-represented on medical ethics boards?
Actual medical conspiracies, such as the Tuskegee syphilis experiment, probably contribute to public credence in medical conspiracy theories, such as anti-vax or HIV-AIDS denialism, which have a directly detrimental effect on public health.
Probably.

In a culture of ideal rationalists, you might be better off having a government-run lottery where people were randomly selected for participation in medical experiments, with participation upon selection being mandatory for any experiment, whatever its effects on the participants, and experiments being approved only if their expected returns were more valuable than any negative effect (including loss of time) imposed on the participants. But we’re a species which is instinctively more afraid of sharks than stairs, so for human beings this probably isn’t a good recipe for social harmony.
So would you be in favor of educating people why things like the Tuskegee experiment or human experimentation on abducted children are good things?

Not directly, because I don’t think it would be likely to work. I do think that people should be educated in practical applications of utilitarianism (for instance, the importance of efficiency in charity), but I don’t think that this would be likely to result in widespread approval of such practices.
In the specific case of the Tuskegee experiment, the methodology was not good, and given that treatments were already available, the expected return was not that great, so it’s not a very good example from which to generalize the potential value of studies which would be considered exploitative of the test subjects.
Syphilis already had a treatment, hence the experiment was not going to save the millions suffering, since they were already saved. Also, those scientists didn’t have good enough methodology to have gotten anything useful out of it in either case. There’s a general air of incompetence surrounding the whole thing that worries me more than the morality.
As I said: before doing anything like this you have to run your numbers VERY carefully. The probability of any given study solving a disease on its own is extremely small, and there are all sorts of other practical problems. That’s the thing: utilitarianism is correct, and not answering according to it is fighting the hypothetical. But in cases like this perhaps you should fight the hypothetical, since you’re using specific historical examples that very clearly did NOT have positive utility and did NOT run the numbers.
It’s a fact that a specific type of utilitarianism is the only thing that makes sense if you know the math. It’s also a fact that there are many ifs and buts that make human non-utilitarian moral intuition a heuristic far more reliable for actually achieving the greatest utility than trying to run the numbers yourself in the vast majority of real-world cases. Finally, it’s a fact that most things done in the name of ANY moral system are actually bullshit excuses.
http://en.wikipedia.org/wiki/Tuskegee_syphilis_experiment

Several African American health workers and educators associated with Tuskegee Institute helped the PHS to carry out its experimentation and played a critical role in its progression, though the extent to which they were aware of the methodology of the study is not clear in all cases. Robert Russa Moton, the head of Tuskegee Institute at the time, and Eugene Dibble, of the Tuskegee Medical Hospital, both lent their endorsement and institutional resources to the government study. Nurse Eunice Rivers, an African-American trained at Tuskegee Institute who worked at its affiliated John Andrew Hospital, was recruited at the start of the study.
Vonderlehr was a strong advocate for Nurse Rivers’ participation, as she was the direct link to the community. During the Great Depression of the 1930s, the Tuskegee Study began by offering lower class African Americans, who often could not afford health care, the chance to join “Miss Rivers’ Lodge”. Patients were to receive free physical examinations at Tuskegee University, free rides to and from the clinic, hot meals on examination days, and free treatment for minor ailments.
Based on the available health care resources, Nurse Rivers believed that the benefits of the study to the men outweighed the risks.
What do you think of that utilitarian calculation? I’m not sure what I think of it.
It seems like either (1) Rivers was deceived, or (2) she was in some other way unaware that there was already an effective cure for syphilis which was not going to be given to the experimental subjects, or (3) the other options available to these people were so wretched that they were worse than having syphilis left untreated.
In cases 1 and 2, it doesn’t really matter what we think of her calculations; if you’re fed sufficiently wrong information then correct algorithms can lead you to terrible decisions. In case 3, maybe Rivers really didn’t have anything better to do—but only because other circumstances left the victims of this thing in an extraordinarily terrible position to begin with. (In much the same way as sawing off your own healthy left arm can be the best thing to do—if someone is pointing a gun at your head and will definitely kill you if you don’t. That doesn’t say much about the merits of self-amputation in less ridiculous situations.)
I find #3 very implausible, for what it’s worth.
(Now, if the statement were that Rivers believed that the benefits to the community outweighed the risks, and indeed the overt harm, to the subjects of the experiment, that would be more directly to the point. But that’s not what the article says.)
It seems like either (1) Rivers was deceived, or (2) she was in some other way unaware that there was already an effective cure for syphilis which was not going to be given to the experimental subjects, or (3) the other options available to these people were so wretched that they were worse than having syphilis left untreated.
Or (4), she was led to believe, either explicitly or implicitly, that her career and livelihood would be in jeopardy if she did not participate—thus motivating her to subconsciously sabotage her own utility calculations and then convince herself that the sabotaged calculations were valid.
In cases 1 and 2, it doesn’t really matter what we think of her calculations; if you’re fed sufficiently wrong information then correct algorithms can lead you to terrible decisions.
But that might still matter. It may be that utilitarianism produces the best results given no bad information, but something else, like “never permit experimentation without informed consent,” would produce better results (on average) in a world that contains bad information. Whether it does will depend especially on the frequency and nature of the bad information: the more the bad information encourages excess experimentation, the worse utilitarianism comes out in the comparison.
But a good utilitarian will certainly take into account the likelihood of bad information and act appropriately. Hence the great utilitarian Mill’s advocacy of minimal interference in people’s lives in On Liberty, largely on the basis of the ways that ubiquitous bad information will make well-intentioned interference backfire often enough to make it a lower-expected-utility strategy in a very wide range of cases.
A competent utilitarian might be able to take into account the limitations of noisy information, maybe even in some way more useful than passivity. That’s not the same class of problem as information which has been deliberately and systematically corrupted by an actual conspiracy in order to lead the utilitarian decisionmaker to the conspiracy’s preferred conclusion.
The cure was discovered after the experiment had been going on for eight years, which complicates matters. At that point, I think her best strategy would have been to arrange for the men to find out about the cure in some way that couldn’t be traced back to her.
She may have believed that the men would have died more quickly of poverty if they hadn’t been part of the experiment.
What do you think of that utilitarian calculation?
Which one? The presumed altruistic one, or the real-life one (which I think included the utility of having a job, the readiness to disobey authority, etc.)?

The altruistic one, mostly.
Endorse? You mean, publicly, not on LessWrong, where doing so will get me much more than downvotes, and still have zero chance of making it actually happen? Of course not, but that has nothing to do with whether it’s a good idea.
I meant “endorse” in the sense that, unlike the Milgram experiment, there is no authority figure to take responsibility on your behalf.

Do you think it’s a good idea?

If it would actually work, and there are no significant bad consequences we’re missing (significant meaning at least on the scale of malaria being cured faster), or any significant bad consequences are balanced out by significant good consequences we’re also missing, then yes.
The question is not “would this be a net benefit” (and it probably would, as much as I cringe from it). The question is, are there no better options?

Such as? Experimenting on animals? That will probably make progress slower; think about all the people who would die from malaria in the meantime.
Yes. How many more? Would experimenting on little girls actually help that much? Also consider that many people consider a child’s life more valuable than an adult one; that even in a world where you would not have to kidnap girls, evade legal problems, and deal with psychological costs on the scientists, caring for little humans is significantly more expensive than caring for little mice; that said kidnapping, legal, and psychological costs do exist; and that you could instead spend that money on mosquito nets and the like and save lives that way...
The answer is not obviously biased towards “experiment on little girls.” In fact, I’d say it’s still biased towards “experiment on mice.” Morality isn’t like physics; the answer doesn’t always add up to normality, but a whole lot of the time it does.
Would experimenting on little girls actually help that much?
...
The answer is not obviously biased towards “experiment on little girls.” In fact, I’d say it’s still biased towards “experiment on mice.”
So your answer is that in fact it would not work. That is a reasonable response to an outrageous hypothetical. Yet James A. Donald suggested a realistic scenario, and beside it, the arguments you come up with look rather weak.
Would experimenting on little girls actually help that much? Also consider that many people consider a child’s life more valuable than an adult one
Given the millions killed by malaria and at most thousands of experimental subjects, it takes a heavy thumb on the scales of this argument to make the utilitarian calculation come out against.
...evade legal problems and deal with psychological costs...
This is a get-out-of-utilitarianism-free card. A real utilitarian simply chooses the action of maximum utility. He would only pay a psychological cost for not doing that. When all are utilitarians the laws will also be utilitarian, and an evaluation of utility will be the sole criterion applied by the courts.
You are not a utilitarian. Neither is anyone else. This is why there would be psychological costs and why there are legal obstacles. You feel obliged to pretend to be a utilitarian, so you justify your non-utilitarian repugnance by putting it into the utilitarian scales.
caring for little humans is significantly more expensive than caring for little mice
But not any more expensive than caring for chimpanzees. Where, of course, “care for” does not mean “care for”, but means “keep sufficiently alive for experimental purposes”.
This looks like motivated reasoning. The motivation, to not torture little children, is admirable. But it is misapplied.
Morality isn’t like physics
Can you expand on what you see as the differences?
Would experimenting on little girls actually help that much?
...
No, seriously. I’ve read the original comment, James A. Donald does not support his claim.
But not any more expensive than caring for chimpanzees. Where, of course, “care for” does not mean “care for”, but means “keep sufficiently alive for experimental purposes”.
This is granted. References to small mice were silly and are now being replaced by “small chimpanzees.” However...
Given the millions killed by malaria and at most thousands of experimental subjects, it takes a heavy thumb on the scales of this argument to make the utilitarian calculation come out against.
This is not the calculation being made. Using your numbers, experimenting on little girls needs to be at least 1.001 times as effective as experimenting on chimpanzees or mice to be worthwhile (because then you save an extra thousand lives for your thousand girls sacrificed.) It’s not a flat “little girls versus millions of malaria deaths.”
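To spell out the arithmetic behind that 1.001 figure, here is a minimal sketch using only the rough orders of magnitude already mentioned in this thread (millions of malaria deaths, thousands of subjects); the variable names and numbers are illustrative assumptions, not real epidemiology:

```python
# Break-even sketch for the "1.001 times as effective" claim above.
# Assumptions (illustrative only): a chimpanzee/mouse programme would avert
# roughly `lives_averted_animals` deaths, and a human-subject programme
# would avert k times that many, at the cost of `human_subjects` subjects.

lives_averted_animals = 1_000_000   # "millions killed by malaria", order of magnitude
human_subjects = 1_000              # "at most thousands of experimental subjects"

# The human programme only comes out ahead if the extra lives it saves
# exceed the lives of the subjects themselves:
#     k * lives_averted_animals - lives_averted_animals > human_subjects
# Solving for k gives the break-even effectiveness ratio:
break_even_k = 1 + human_subjects / lives_averted_animals
print(break_even_k)  # 1.001, matching the figure quoted above
```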
This is, quite frankly, not clear to me, and I’d want to call in an actual medical researcher to clarify. Doubly so, with artificial human organs becoming more and more possible (such organs are obviously significantly cheaper than humans.)
This is a get-out-of-utilitarianism-free card. A real utilitarian simply chooses the action of maximum utility. He would only pay a psychological cost for not doing that. When all are utilitarians the laws will also be utilitarian, and an evaluation of utility will be the sole criterion applied by the courts.
Actually, I was interpreting the hypothetical as “utilitarian government in our world.” But fine, least convenient possible world and all that. That’s why I set the non-society costs aside from the rest.
You feel obliged to pretend to be a utilitarian, so you justify your non-utilitarian repugnance by putting it into the utilitarian scales.
This looks like motivated reasoning. The motivation, to not torture little children, is admirable. But it is misapplied.
Honestly, this is probably true—case in point, I would rather not write a similar post from the opposite side. That being said, looking through my arguments, most of them hinge on the implausibility of human experimentation really being all that much more effective than chimpanzee and artificial-organ experimentation.
Morality isn’t like physics
Can you expand on what you see as the differences?
The physics calculations around us have already been done perfectly. If, when we try to emulate them with our theories, we get something abnormal, it means our calculations are wrong and we need to either fix the calculation or the model. When we’ve done it all right, it should all add up to normality.
Our current morality, on the other hand, is a thing created over a few thousand years by society as a whole, and it occasionally generates things like slavery. It is not guaranteed to be perfectly calculated already, and if our calculations turn up something abnormal, it could mean that either our calculations or the world is wrong.
This is not the calculation being made. Using your numbers, experimenting on little girls needs to be at least 1.001 times as effective as experimenting on chimpanzees or mice to be worthwhile (because then you save an extra thousand lives for your thousand girls sacrificed.) It’s not a flat “little girls versus millions of malaria deaths.”
Point taken.
This is, quite frankly, not clear to me, and I’d want to call in an actual medical researcher to clarify.
Well, yes. I doubt that JAD has particular expertise in malarial research, I don’t and neither do you. To know whether a malarial research programme would benefit scientifically from a supply of humans to experiment on with no more restraint than we use with chimpanzees, one would have to ask someone with that expertise. But I think the hypothesis prima facie plausible enough to conduct the hypothetical argument, in a way which merely saying “suppose you could save millions of lives by torturing some children” is not.
After all, all medical interventions intended for humans must at some point be tested on humans, or we don’t really know what they do in humans. At present, human testing is generally the last phase undertaken. That’s partly because humans are more expensive than test-tubes or mice. (I’m not sure how they compare with chimpanzees, given the prices that poor people in some parts of the world sell their children for.) But it is also partly because of the ethical problems of involving humans earlier.
That’s partly because humans are more expensive than test-tubes or mice. (I’m not sure how they compare with chimpanzees, given the prices that poor people in some parts of the world sell their children for.)
Note also that getting humans to experiment on by buying them from poor third world parents is generally frowned upon.
This is not the calculation being made. Using your numbers, experimenting on little girls needs to be at least 1.001 times as effective as experimenting on chimpanzees or mice to be worthwhile (because then you save an extra thousand lives for your thousand girls sacrificed.)
Well, given that more than 1 in 1000 drugs that look promising in animals fail human trials, I’d say that is a ridiculously low bar to pass.

How many drugs that look promising in one human trial fail to pass later human trials?
Given the millions killed by malaria and at most thousands of experimental subjects, it takes a heavy thumb on the scales of this argument to make the utilitarian calculation come out against.
If it would result in a timely cure for malaria which would result in the disease’s global eradication or near-eradication, I would say that it would be worth kidnapping a few thousand children. But not only would a world where you could get away with doing so differ from our own in some very significant ways, I honestly doubt that a few thousand captive test subjects constitute a decisive and currently limiting factor in the progress of the research.
Oh wait, we’re talking about an entire society that’s utilitarian and rational. In that case I’m (coordinating with everyone else via Aumann agreement) just dedicating the entire global population to a monstrous machine for maximally efficient FAI research, where 99% of people are suffering beyond comprehension with no regard for their own well-being in order to support a few elite researchers as they dedicate literally every second of their lives to thinking at maximal efficiency while pumped full of nootropics that’ll kill them in a few years.
Would you be willing to endorse this proposal? If not, why not?
This particular proposal? No.
But mainly because we already have the tech to effectively cure malaria; it’s called “DDT” and the only reason we aren’t using it now is a lack of political will to challenge the environmental movement. If we lived in the Donaldverse where this proposal could be taken seriously, it wouldn’t be hard to get a widespread mosquito eradication movement started; after all, sentimental concerns are the main reason we’re handicapping ourselves here in the first place.
In general though, I think human experimentation does have merits. So much of what we know about our biology, especially the biology of the brain, comes from examining the victims of rare mutations, diseases, or accidents which impaired the functioning of a specific chemical pathway or tissue. If we could do organized knockout studies there is a good chance that we could gain a lot of knowledge which otherwise might take decades to uncover. But like a lot of other interesting ideas, the Nazis kind of messed this one up for the rest of us; there’s really no chance of this sort of thing being allowed in the current political climate, so speculating about it is idle almost by definition.
DDT is widely used in the third world right now.

DDT resistance in mosquitoes is rampant due to overuse.

Current WHO regulations specify not using it where resistance is observed. Hardly the sort of regulation we have against DDT in the US (where malaria is not really a problem).
But like a lot of other interesting ideas, the Nazis kind of messed this one up for the rest of us
That is backwards. It is not because the Nazis did it that experimenting on non-consenting human subjects is considered repugnant. It is because it is repugnant, that the Nazis are condemned for doing it.
there’s really no chance of this sort of thing being allowed in the current political climate
Is that an expression of regret for lost possibilities? There is no chance of this sort of thing being allowed in any non-evil political climate.
There is no chance of this sort of thing being allowed in any non-evil political climate.
While that may be true, the catch may lie in finding a “non-evil” political climate.
Here’s what has been happening in reality in the 21st century: …after 9/11, health professionals working with the military and intelligence services “designed and participated in cruel, inhumane and degrading treatment and torture of detainees”. Medical professionals were in effect told that their ethical mantra “first do no harm” did not apply, because they were not treating people who were ill. (Link)

Would it were that this were so.
It’s more a matter of what evil means. If it is allowed, that’s worth a good many points in the evil column of the report card.
It has certainly happened in climates less evil than some others we know of, but that case was enabled by the general lack of human regard for the class of people experimented on.
Yes, agreed. I’m not sure I know what “evil” means, but I’m fairly sympathetic to the view that, as the saying goes, good folk can allow evil to thrive by doing nothing.