It seems like either (1) Rivers was deceived, or (2) she was in some other way unaware that there was already an effective cure for syphilis which was not going to be given to the experimental subjects, or (3) the other options available to these people were so wretched that they were worse than having syphilis left untreated.
In cases 1 and 2, it doesn’t really matter what we think of her calculations; if you’re fed sufficiently wrong information then correct algorithms can lead you to terrible decisions. In case 3, maybe Rivers really didn’t have anything better to do—but only because other circumstances left the victims of this thing in an extraordinarily terrible position to begin with. (In much the same way as sawing off your own healthy left arm can be the best thing to do—if someone is pointing a gun at your head and will definitely kill you if you don’t. That doesn’t say much about the merits of self-amputation in less ridiculous situations.)
I find #3 very implausible, for what it’s worth.
(Now, if the statement were that Rivers believed that the benefits to the community outweighed the risks, and indeed the overt harm, to the subjects of the experiment, that would be more directly to the point. But that’s not what the article says.)
It seems like either (1) Rivers was deceived, or (2) she was in some other way unaware that there was already an effective cure for syphilis which was not going to be given to the experimental subjects, or (3) the other options available to these people were so wretched that they were worse than having syphilis left untreated.
Or (4), she was led to believe, either explicitly or implicitly, that her career and livelihood would be in jeopardy if she did not participate—thus motivating her to subconsciously sabotage her own utility calculations and then convince herself that the sabotaged calculations were valid.
In cases 1 and 2, it doesn’t really matter what we think of her calculations; if you’re fed sufficiently wrong information then correct algorithms can lead you to terrible decisions.
But that might still matter. It may be that utilitarianism produces the best results given no bad information, while something else, like “never permit experimentation without informed consent,” would produce better results on average in a world that contains bad information. Whether the rule wins depends on the frequency and nature of the bad information: the more the bad information encourages excess experimentation, the worse utilitarianism comes out in the comparison.
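For what it’s worth, here is a toy simulation of that comparison. This is entirely my own sketch, with made-up numbers: a Gaussian model of true utilities and a simple “corrupted reports look favorable” model of bad information, neither of which comes from the thread. With no bad information the act-utilitarian beats the blanket rule; as corrupted reports become common, the ranking flips.

```python
# Toy Monte Carlo sketch (illustrative assumptions only): an
# act-utilitarian decides whether to experiment based on reported
# utilities, some fraction of which are corrupted to look favorable.
# The fixed rule "never experiment" always realizes 0 utility.
import random

def average_realized_utility(p_bad_info, trials=100_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # True net utility of running the experiment; negative on average.
        true_utility = rng.gauss(-1.0, 1.0)
        # With probability p_bad_info, the report is corrupted so that
        # the experiment looks clearly worthwhile.
        if rng.random() < p_bad_info:
            reported = abs(true_utility) + 1.0
        else:
            reported = true_utility
        # The act-utilitarian experiments iff the reported utility is positive.
        if reported > 0:
            total += true_utility
    return total / trials

for p in (0.0, 0.1, 0.3, 0.5):
    avg = average_realized_utility(p)
    print(f"p(bad info)={p:.1f}: act-utilitarian avg={avg:+.3f}, fixed rule avg=+0.000")
```

The point is only directional: where the crossover happens depends entirely on the assumed frequency and bias of the corruption, which is exactly the dependence described above.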
But a good utilitarian will certainly take into account the likelihood of bad information and act appropriately. Hence the great utilitarian Mill’s advocacy, in On Liberty, of minimal interference in people’s lives, largely on the grounds that ubiquitous bad information makes well-intentioned interference backfire often enough to be the lower-expected-utility strategy in a very wide range of cases.
A competent utilitarian might be able to take into account the limitations of noisy information, perhaps even in some way more useful than simple passivity. But that’s not the same class of problem as information that has been deliberately and systematically corrupted by an actual conspiracy in order to lead the utilitarian decision-maker to the conspiracy’s preferred conclusion.
The cure was discovered after the experiment had been going on for eight years, which complicates matters. At that point, I think her best strategy would have been to arrange for the men to find out about the cure in some way that couldn’t be traced back to her.
She may have believed that the men would have died more quickly of poverty if they hadn’t been part of the experiment.