That disease already had a treatment, so the study was not going to save the millions suffering; they were already saved. Also, those scientists didn’t have good enough methodology to have gotten anything useful out of it in any case. There’s a general air of incompetence surrounding the whole thing that worries me more than the morality.
As I said: before doing anything like this you have to run your numbers VERY carefully. The probability of any given study solving a disease on its own is extremely small, and there are all sorts of other practical problems. That’s the thing: utilitarianism is correct, and not answering according to it is fighting the hypothetical. But in cases like this perhaps you should fight the hypothetical, since you’re using specific historical examples that very clearly did NOT have positive utility and did NOT run the numbers.
It’s a fact that a specific type of utilitarianism is the only thing that makes sense if you know the math. It’s also a fact that there are many ifs and buts that make human non-utilitarian moral intuition a far more reliable heuristic for actually achieving the greatest utility than trying to run the numbers yourself in the vast majority of real-world cases. Finally, it’s a fact that most things done in the name of ANY moral system are actually bullshit excuses.
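The “run your numbers” point above can be made concrete with toy expected-value arithmetic. Every figure below is invented purely for illustration, not an estimate about any real or historical study:

```python
# Toy expected-utility check for a harmful study -- all numbers are
# invented for illustration, not estimates about any historical case.
p_breakthrough = 0.0001       # chance this one study cures the disease on its own
benefit_if_cure = 1_000_000   # utility of a cure (arbitrary units)
harm_to_subjects = 400        # certain utility lost by the untreated subjects

# Expected utility: probabilistic upside minus certain downside.
expected_utility = p_breakthrough * benefit_if_cure - harm_to_subjects
print(expected_utility)       # negative: the certain harm dominates
```

Because the harm is certain and paid up front while the breakthrough probability is tiny, the expected value goes negative long before the inputs look extreme; that is the sense in which such studies clearly did not have positive utility.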
Several African American health workers and educators associated with Tuskegee Institute helped the PHS to carry out its experimentation and played a critical role in its progression, though the extent to which they were aware of the methodology of the study is not clear in all cases. Robert Russa Moton, the head of Tuskegee Institute at the time, and Eugene Dibble, of the Tuskegee Medical Hospital, both lent their endorsement and institutional resources to the government study. Nurse Eunice Rivers, an African American trained at Tuskegee Institute who worked at its affiliated John Andrew Hospital, was recruited at the start of the study.
Vonderlehr was a strong advocate for Nurse Rivers’ participation, as she was the direct link to the community. During the Great Depression of the 1930s, the Tuskegee Study began by offering lower class African Americans, who often could not afford health care, the chance to join “Miss Rivers’ Lodge”. Patients were to receive free physical examinations at Tuskegee University, free rides to and from the clinic, hot meals on examination days, and free treatment for minor ailments.
Based on the available health care resources, Nurse Rivers believed that the benefits of the study to the men outweighed the risks.
What do you think of that utilitarian calculation? I’m not sure what I think of it.
It seems like either (1) Rivers was deceived, or (2) she was in some other way unaware that there was already an effective cure for syphilis which was not going to be given to the experimental subjects, or (3) the other options available to these people were so wretched that they were worse than having syphilis left untreated.
In cases 1 and 2, it doesn’t really matter what we think of her calculations; if you’re fed sufficiently wrong information then correct algorithms can lead you to terrible decisions. In case 3, maybe Rivers really didn’t have anything better to do—but only because other circumstances left the victims of this thing in an extraordinarily terrible position to begin with. (In much the same way as sawing off your own healthy left arm can be the best thing to do—if someone is pointing a gun at your head and will definitely kill you if you don’t. That doesn’t say much about the merits of self-amputation in less ridiculous situations.)
I find #3 very implausible, for what it’s worth.
(Now, if the statement were that Rivers believed that the benefits to the community outweighed the risks, and indeed the overt harm, to the subjects of the experiment, that would be more directly to the point. But that’s not what the article says.)
It seems like either (1) Rivers was deceived, or (2) she was in some other way unaware that there was already an effective cure for syphilis which was not going to be given to the experimental subjects, or (3) the other options available to these people were so wretched that they were worse than having syphilis left untreated.
Or (4), she was led to believe, either explicitly or implicitly, that her career and livelihood would be in jeopardy if she did not participate—thus motivating her to subconsciously sabotage her own utility calculations and then convince herself that the sabotaged calculations were valid.
In cases 1 and 2, it doesn’t really matter what we think of her calculations; if you’re fed sufficiently wrong information then correct algorithms can lead you to terrible decisions.
But that might still matter. It may be that utilitarianism produces the best results given no bad information, but something else, like “never permit experimentation without informed consent,” would produce better results (on average) in a world that contains bad information. Especially since whether the latter produces better results will depend on the frequency and nature of the bad information—the more the bad information encourages excess experimentation, the worse utilitarianism comes out in the comparison.
But a good utilitarian will certainly take into account the likelihood of bad information and act appropriately. Hence the great utilitarian Mill’s advocacy of minimal interference in people’s lives in On Liberty, largely on the grounds that ubiquitous bad information will make well-intentioned interference backfire often enough to make it a lower-expected-utility strategy in a very wide range of cases.
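Mill’s argument, as glossed above, can be sketched as a toy decision model. The payoffs and probabilities here are made up solely to show the shape of the comparison, not to quantify any real policy:

```python
# Toy model: act on your information vs. follow a fixed non-interference
# rule. All payoffs are invented for illustration only.
gain_if_info_good = 10    # a well-informed intervention helps a little
loss_if_info_bad = -50    # a misinformed intervention backfires badly

def expected_utility_of_intervening(p_bad_info):
    """Expected utility of intervening when your information is wrong
    with probability p_bad_info."""
    return (1 - p_bad_info) * gain_if_info_good + p_bad_info * loss_if_info_bad

# The rule "don't interfere" scores 0 by construction, so intervening
# beats the rule only while bad information is sufficiently rare.
for p in (0.05, 0.2, 0.5):
    print(p, expected_utility_of_intervening(p))
```

With these numbers, intervention is worthwhile at 5% bad information but already loses to the do-nothing rule at 20%, which is the backfire dynamic the paragraph attributes to Mill.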
A competent utilitarian might be able to take into account the limitations of noisy information, maybe even in some way more useful than passivity. That’s not the same class of problem as information which has been deliberately and systematically corrupted by an actual conspiracy in order to lead the utilitarian decision-maker to the conspiracy’s preferred conclusion.
The cure was discovered after the experiment had been going on for eight years, which complicates matters. At that point, I think her best strategy would have been to arrange for the men to find out about the cure in some way that could not be traced back to her.
She may have believed that the men would have died more quickly of poverty if they hadn’t been part of the experiment.
What do you think of that utilitarian calculation?
Which one? The presumed altruistic one, or the real-life one (which I think included the utility of having a job, the readiness to disobey authority, etc.)?
http://en.wikipedia.org/wiki/Tuskegee_syphilis_experiment
The altruistic one, mostly.