Informed consent bias in RCTs?
The problem of published research findings not being reliable has been discussed here before.
One problem with RCTs that has received little attention is that, due to informed consent laws and ethical considerations, subjects are aware that they might be receiving sham therapy. This differs from the environment outside of the research setting, where people are confident that whatever their doctor prescribes is what they will get from their pharmacist. I can imagine many ways in which subjects’ uncertainty about treatment assignment could affect outcomes (adherence is one possible mechanism). I wrote a short paper about this, focusing on what we would ideally estimate if we could lie to subjects, versus what we actually can estimate in RCTs (link). Here is the abstract:
It is widely recognized that traditional randomized controlled trials (RCTs) have limited generalizability due to the numerous ways in which conditions of RCTs differ from those experienced each day by patients and physicians. As a result, there has been a recent push towards pragmatic trials that better mimic real-world conditions. One way in which RCTs differ from normal everyday experience is that all patients in the trial have uncertainty about what treatment they were assigned. Outside of the RCT setting, if a patient is prescribed a drug then there is no reason for them to wonder if it is a placebo. Uncertainty about treatment assignment could affect both treatment and placebo response. We use a potential outcomes approach to define relevant causal effects based on combinations of treatment assignment and belief about treatment assignment. We show that traditional RCTs are designed to estimate a quantity that is typically not of primary interest. We propose a new study design that has the potential to provide information about a wider range of interesting causal effects.
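To make the abstract's framing concrete, here is a toy simulation of potential outcomes Y(z, b) indexed by both treatment assignment z and belief about assignment b. All effect sizes here are invented purely for illustration; the point is just that if belief interacts with treatment (e.g. via adherence), the contrast a blinded RCT estimates can differ from the contrast that matters in ordinary practice, where patients believe they received what was prescribed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical potential outcomes Y(z, b):
#   z = actual treatment (1 = drug, 0 = placebo)
#   b = belief (1 = certain of receiving the drug, 0 = uncertain)
# Invented model: a direct drug effect, an expectancy (belief) effect,
# and an interaction (e.g. confident patients adhere better to the drug).
def outcome(z, b):
    noise = rng.normal(0, 1, n)
    return 1.0 * z + 0.5 * b + 0.3 * z * b + noise

# Real-world contrast: everyone is certain they got what was prescribed.
effect_real_world = outcome(1, 1).mean() - outcome(0, 1).mean()

# Traditional RCT contrast: everyone is uncertain (b = 0 for both arms).
effect_rct = outcome(1, 0).mean() - outcome(0, 0).mean()

print(f"real-world effect: {effect_real_world:.2f}")  # ~1.3 (includes z*b interaction)
print(f"RCT effect:        {effect_rct:.2f}")         # ~1.0
```

Under these made-up numbers the blinded trial recovers 1.0 while the quantity relevant to routine care is 1.3; the gap is exactly the interaction term that blinding renders unobservable.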
Any thoughts on this? Is this a trivial technical issue or something worth addressing?
I wonder, are you allowed to sign people up for a study within some parameters over some time period (e.g. “you’re signing up to participate in a study in which we may lie to you sometime in the next year”)?
Probably not. I’ve heard from experimenters that even getting economic experiments approved by a university institutional review board can be problematic, and those just involve paying subjects to sit at a computer and play a relatively simple game (often with a chat feature for communicating with other subjects), with no deception at all.
I’d worry much more about professional clinical subjects corrupting results.
And as I noted in the comments there, the corruption goes all the way up.
In the days when there were no good treatments for AIDS, patients would conspire to figure out who had the drug and who had the placebo, so that anyone getting the placebo could quit and try to sign up for a study where they might get the drug. You may (rightly) say that’s missing the point of an RCT, but the point is that desperate people aren’t going to be in it for the science. So if you’re blaming the subjects, the problem then is how to find a population of human medical test subjects who have the proper scientific disinterest. (Let alone the researchers.)
So, LessWrong readers: what would convince you to sign up as a guinea pig?
Placebo controls are not required for a valid experiment. If you tell the research subjects that there are, say, 4 treatment regimens which the doctors have roughly equal expectation to work, and that they will be randomly assigned among them, then there’s a lot less interference of the sort described above, the doctors can tell the truth, and the experiment still yields useful information.