The ethical principles of experimenting on human beings are pretty subtle. It’s not just about protecting people from quackery, though he is right that there is a legacy of Nuremberg involved. Read, for example, the guidelines that the Institutional Review Boards that approve scientific research must follow:
* Respect for persons involves a recognition of the personal dignity and autonomy of individuals, and special protection of those persons with diminished autonomy.
* Beneficence entails an obligation to protect persons from harm by maximizing anticipated benefits and minimizing possible risks of harm.
* Justice requires that the benefits and burdens of research be distributed fairly.
The most relevant principle here is “beneficence”. Unless the experimenter can claim to be in equipoise about which of two procedures will be more beneficial, they are obligated to use the presumed better option (which rules out randomization). You can get away with more in the pursuit of practice than in the pursuit of research, but practice is deliberately restricted to prevent it from yielding generalizable knowledge.
Roughly put, society has decided that the only experiments we perform should be ones with no appreciable possibility of harm to the participants. It has rejected the alternative: that the progress of science sometimes requires noble volunteers to try things we can’t be sure are good, and which might be expected to be a bit worse, so that society can learn when they turn out to be better, or learn things that point toward the better option. In a more rational society, everyone would have to accept that their treatment might not be the best possible for them (given our current state of ignorance), but could insist in return that the treatment be designed to yield generalizable knowledge for the future.