Specifically, of those claims, the one that it’d be easier to kidnap someone than to find a volunteer (say, an adult willing to do it in exchange for their family receiving a large sum of money) sounds highly implausible.
What’s your opinion of doing it Tuskegee-style, rather than kidnapping them or getting volunteers? (One could believe that there might be a systematic difference between people who volunteer and the general population, for example.)
In general, given ethical norms as they currently exist, rather than in a hypothetical universe where everyone is a strict utilitarian, I think the expected returns on such an experiment are unlikely to be worth the reputational costs.
The Tuskegee experiment may have produced some useful data, but it certainly didn’t produce returns on the scale of reducing global syphilis incidence to zero. Likewise, even extensive experimentation on abducted children is unlikely to do so for malaria. The Tuskegee experiment, though, is still seen as a black mark on the reputation of medical researchers and the government; I’ve encountered people who, having heard of it, genuinely believed that it, rather than the extremely stringent standards that currently exist for publishable studies, was a more accurate description of the behavior of present-day researchers. That sort of thing isn’t easy to escape.
Any effective utilitarian must account for the fact that we’re operating in a world which is extremely unforgiving of behavior such as cutting up a healthy hospital visitor to save several in need of organ transplants, and condition their behavior on that knowledge.
Here’s one with actual information gained: Imperial Japanese experimentation on frostbite.

For example, Unit 731 proved that the best treatment for frostbite was not rubbing the limb, which had been the traditional method, but immersion in water a bit warmer than 100 degrees, but never more than 122 degrees [Fahrenheit; roughly 38–50 °C].
The cost of this scientific breakthrough was borne by those seized for medical experiments. They were taken outside and left with exposed arms, periodically drenched with water, until a guard decided that frostbite had set in. Testimony from a Japanese officer said this was determined after the “frozen arms, when struck with a short stick, emitted a sound resembling that which a board gives when it is struck.”
I don’t get the impression that those experiments destroyed a lot of trust—nothing compared to the rape of Nanking or Japanese treatment of American prisoners of war.
However, it might be worth noting that that sort of experimentation doesn’t seem to happen to people who are affiliated with the scientists or the government.
Logically, people could volunteer for such experiments and get the same respect that soldiers do, but I don’t know of any real-world examples.
I don’t get the impression that those experiments destroyed a lot of trust—nothing compared to the rape of Nanking or Japanese treatment of American prisoners of war.
It’s hard for experiments to destroy trust when those doing the experiments aren’t trusted anyway because they do other things that are as bad (and often on a larger scale).
Logically, people could volunteer for such experiments and get the same respect that soldiers do, but I don’t know of any real-world examples.
I was going to say that I didn’t think that medical researchers had ever solicited volunteers for experiments which are near certain to produce such traumatic effects, but on second thought, I do recall that some of the early research on the effects of decompression (as experienced by divers) was done by a scientist who solicited volunteers to be subjected to decompression sickness. I believe that some research on the effects of dramatic deceleration was also done similarly.
I have heard of someone who was trying to determine the biomechanics of crucifixion (what part of the forearm the nail goes through, whether suffocation is actually the main cause of death, and so on), who ran some initial tests with medical cadavers, and then with tied-up volunteers, some of whom were disappointed that they weren’t going to have actual nails driven through their wrists. Are extreme masochists under-represented on medical ethics boards?
Actual medical conspiracies, such as the Tuskegee syphilis experiment, probably contribute to public credence in medical conspiracy theories, such as anti-vax or HIV-AIDS denialism, which have a directly detrimental effect on public health.
Probably.

In a culture of ideal rationalists, you might be better off having a government-run lottery where people were randomly selected for participation in medical experiments, with participation being mandatory upon selection for any experiment, whatever its effects on the participants, and all experiments being approved only if their expected returns were more valuable than any negative effects (including loss of time) imposed on the participants. But we’re a species which is instinctively more afraid of sharks than stairs, so for human beings this probably isn’t a good recipe for social harmony.
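(In symbols, and this notation is mine rather than anything in the thread, the approval condition would be roughly

$$\text{approve experiment } E \iff \mathbb{E}[B(E)] > \sum_{i \in \text{participants}} \mathbb{E}[C_i(E)],$$

where $B(E)$ is the social return of $E$ and $C_i(E)$ is every cost it imposes on participant $i$, down to lost time.)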
So would you be in favor of educating people why things like the Tuskegee experiment or human experimentation on abducted children are good things?

Not directly, because I don’t think it would be likely to work. I do think that people should be educated in practical applications of utilitarianism (for instance, the importance of efficiency in charity), but I don’t think that this would be likely to result in widespread approval of such practices.
In the specific case of the Tuskegee experiment, the methodology was not good, and given that treatments were already available, the expected return was not that great, so it’s not a very good example from which to generalize the potential value of studies which would be considered exploitative of the test subjects.
Syphilis already had a treatment, hence the study was never going to save the millions suffering; they were already saved. Also, those scientists didn’t have good enough methodology to have gotten anything useful out of it in either case. There’s a general air of incompetence surrounding the whole thing that worries me more than the morality.
As I said: before doing anything like this you have to run your numbers VERY carefully. The probability of any given study solving a disease on its own is extremely small, and there are all sorts of other practical problems. That’s the thing: utilitarianism is correct, and not answering according to it is fighting the hypothetical. But in cases like this perhaps you should fight the hypothetical, since you’re using specific historical examples that very clearly did NOT have positive utility and did NOT run the numbers.
It’s a fact that a specific type of utilitarianism is the only thing that makes sense if you know the math. It’s also a fact that there are many ifs and buts that make human non-utilitarian moral intuition a heuristic far more reliable for actually achieving the greatest utility than trying to run the numbers yourself in the vast majority of real-world cases. Finally, it’s a fact that most things done in the name of ANY moral system are actually bullshit excuses.
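(The math in question is presumably the von Neumann–Morgenstern theorem; the comment doesn’t name it, so this gloss is mine. An agent whose preferences over lotteries satisfy completeness, transitivity, continuity, and independence ranks lotteries as if maximizing the expectation of some utility function $u$:

$$L \succeq M \iff \mathbb{E}_L[u] \ge \mathbb{E}_M[u].$$

That fixes expected-utility maximization, though not any particular ethical weighting of other people’s welfare.)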
http://en.wikipedia.org/wiki/Tuskegee_syphilis_experiment

Several African American health workers and educators associated with Tuskegee Institute helped the PHS to carry out its experimentation and played a critical role in its progression, though the extent to which they were aware of the methodology of the study is not clear in all cases. Robert Russa Moton, the head of Tuskegee Institute at the time, and Eugene Dibble, of the Tuskegee Medical Hospital, both lent their endorsement and institutional resources to the government study. Nurse Eunice Rivers, an African American trained at Tuskegee Institute who worked at its affiliated John Andrew Hospital, was recruited at the start of the study.
Vonderlehr was a strong advocate for Nurse Rivers’ participation, as she was the direct link to the community. During the Great Depression of the 1930s, the Tuskegee Study began by offering lower class African Americans, who often could not afford health care, the chance to join “Miss Rivers’ Lodge”. Patients were to receive free physical examinations at Tuskegee University, free rides to and from the clinic, hot meals on examination days, and free treatment for minor ailments.
Based on the available health care resources, Nurse Rivers believed that the benefits of the study to the men outweighed the risks.
What do you think of that utilitarian calculation? I’m not sure what I think of it.
It seems like either (1) Rivers was deceived, or (2) she was in some other way unaware that there was already an effective cure for syphilis which was not going to be given to the experimental subjects, or (3) the other options available to these people were so wretched that they were worse than having syphilis left untreated.
In cases 1 and 2, it doesn’t really matter what we think of her calculations; if you’re fed sufficiently wrong information then correct algorithms can lead you to terrible decisions. In case 3, maybe Rivers really didn’t have anything better to do—but only because other circumstances left the victims of this thing in an extraordinarily terrible position to begin with. (In much the same way as sawing off your own healthy left arm can be the best thing to do—if someone is pointing a gun at your head and will definitely kill you if you don’t. That doesn’t say much about the merits of self-amputation in less ridiculous situations.)
I find #3 very implausible, for what it’s worth.
(Now, if the statement were that Rivers believed that the benefits to the community outweighed the risks, and indeed the overt harm, to the subjects of the experiment, that would be more directly to the point. But that’s not what the article says.)
It seems like either (1) Rivers was deceived, or (2) she was in some other way unaware that there was already an effective cure for syphilis which was not going to be given to the experimental subjects, or (3) the other options available to these people were so wretched that they were worse than having syphilis left untreated.
Or (4), she was led to believe, either explicitly or implicitly, that her career and livelihood would be in jeopardy if she did not participate—thus motivating her to subconsciously sabotage her own utility calculations and then convince herself that the sabotaged calculations were valid.
In cases 1 and 2, it doesn’t really matter what we think of her calculations; if you’re fed sufficiently wrong information then correct algorithms can lead you to terrible decisions.
But that might still matter. It may be that utilitarianism produces the best results given no bad information, but something else, like “never permit experimentation without informed consent”, would produce better results (on average) in a world that contains bad information. And whether the latter produces better results will depend on the frequency and nature of the bad information—the more the bad information encourages excess experimentation, the worse utilitarianism comes out in the comparison.
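To make that concrete, here is a toy simulation (entirely my construction; every number in it is arbitrary) comparing an act-utilitarian who approves whatever the reported numbers favor against a rule-follower who always requires informed consent, as the rate of benefit-inflating bad information varies:

```python
import random

random.seed(0)

def simulate(p_bad_info, n_trials=100_000):
    """Average per-proposal utility of two policies facing the same stream
    of proposed no-consent experiments."""
    act_total = 0.0   # policy 1: run the numbers on each proposal
    rule_total = 0.0  # policy 2: always decline without informed consent
    for _ in range(n_trials):
        true_benefit = random.uniform(-10, 5)  # most proposals aren't worth it
        harm_to_subjects = 3.0
        reported_benefit = true_benefit
        if random.random() < p_bad_info:
            # Bad information systematically inflates the case for experimenting.
            reported_benefit += 15.0
        if reported_benefit > harm_to_subjects:
            # The act-utilitarian approves on the reported numbers,
            # but reality pays out the true ones.
            act_total += true_benefit - harm_to_subjects
        # The rule-follower accrues nothing either way.
    return act_total / n_trials, rule_total / n_trials

for p in (0.0, 0.2, 0.5):
    act, rule = simulate(p)
    print(f"P(bad info) = {p:.1f}: act-utilitarian {act:+.3f}, consent rule {rule:+.3f}")
```

With no bad information the calculator comes out ahead; as the inflation rate rises, the consent rule’s guaranteed zero overtakes it, which is exactly the dependence on the frequency and nature of the bad information described above.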
But a good utilitarian will certainly take into account the likelihood of bad information and act appropriately. Hence the great utilitarian Mill’s advocacy of minimal interference in people’s lives in On Liberty, largely on the basis of the ways that ubiquitous bad information will make well-intentioned interference backfire often enough to make it a lower expected utility strategy in a very wide range of cases.
A competent utilitarian might be able to take into account the limitations of noisy information, maybe even in some way more useful than passivity. That’s not the same class of problem as information which has been deliberately and systematically corrupted by an actual conspiracy in order to lead the utilitarian decisionmaker to the conspiracy’s preferred conclusion.
The cure was discovered after the experiment had been going on for eight years, which complicates matters. At this point, I think her best strategy would have been to arrange for the men to find out about the cure in some way which can’t be traced back to her.
She may have believed that the men would have died more quickly of poverty if they hadn’t been part of the experiment.
What do you think of that utilitarian calculation?
Which one? The presumed altruistic one or the real-life one (which I think included the utility of having a job, the readiness to disobey authority, etc.)?

The altruistic one, mostly.