I only had time to double-check one of the scary links at the top, and I wasn’t too impressed with what I found:
In 2010, a careful review showed that published industry-sponsored trials are four times more likely to show positive results than published independent studies, even though the industry-sponsored trials tend to use better experimental designs.
But the careful review you link to claims that studies funded by industry report 85% positive results, compared to 72% positive for independent organizations and 50% positive for government—which is not what I think of when I hear “four times”! They also give a lot of reasons to think the difference may be benign: industry tends to do different kinds of studies than independent orgs do. The industry studies are mainly Phase III/IV—the part of the approval process where drugs that have already been shown to work in smaller studies are tested on a larger population; the nonprofit and government studies are more often Phase I/II—the first check of whether a promising new chemical works at all. It makes sense that studies on a drug which has already been found to probably work come out more positive than the first studies on a totally new chemical. And the degree to which pharma studies are more likely to be late-phase is greater than the degree to which they are more likely to be positive, and the article doesn’t give stats comparing like to like! The same review finds, with p < .001, that pharma studies are bigger, which again would make them more likely to find a result where one exists.
The only mention of the “4x more likely” number is buried in the Discussion section and cites a completely different study, Lexchin et al.
Lexchin reports an odds ratio of 4, which I think is what your first study meant when it said “industry studies are four times more likely to be positive”. Odds ratios have always been one of my least favorite statistical concepts, and I always feel like I’m misunderstanding them somehow, but I don’t think “odds ratio of 4” and “four times more likely” are connotatively similar (someone smarter, please back me up on this?!). For example, the largest study in Lexchin’s meta-analysis, Yaphe et al, finds that 87% of industry studies are positive versus 65% of independent studies, for an odds ratio of 3.45. But when I hear something like “X is four times more likely than Y”, I think of Y being 20% likely and X being 80% likely, not 65% vs. 87%.
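To make the contrast concrete, here is a minimal sketch in plain Python (my own illustration, not from either paper); the percentages are the rounded ones quoted above, so the computed odds ratio comes out nearer 3.6 than the 3.45 quoted, which presumably reflects the unrounded counts:

```python
# Minimal sketch (plain Python, no real data): contrasting an odds ratio with
# the everyday reading of "times as likely" (a plain ratio of probabilities).

def odds(p):
    """Probability -> odds, e.g. 0.80 -> 4.0 (i.e. 4:1)."""
    return p / (1 - p)

def odds_ratio(p1, p2):
    return odds(p1) / odds(p2)

def times_as_likely(p1, p2):
    """The naive 'X times as likely' reading: a ratio of probabilities."""
    return p1 / p2

# Yaphe et al. as quoted above: 87% of industry studies positive vs. 65% of
# independent ones (rounded percentages, so the figures are approximate).
print(odds_ratio(0.87, 0.65))       # ~3.6
print(times_as_likely(0.87, 0.65))  # ~1.34 -- only about a third more likely

# What "four times more likely" suggests to most ears: 80% vs. 20%.
print(times_as_likely(0.80, 0.20))  # 4.0
print(odds_ratio(0.80, 0.20))       # 16.0 -- a very different odds ratio
```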
This means Lexchin’s results are very, very similar to those of the original study you cite, which provides some confirmation that those are probably the true numbers. Lexchin also provides another hypothesis for what’s going on. He says that “the research methods of trials sponsored by drug companies is at least as good as that of non-industry funded research and in many cases better”, but that along with publication bias, industry fudges the results by comparing their drug to another drug and administering the other drug improperly. For example, if your company makes Drug X, you sponsor a study to prove that it’s better than Drug Y, but give patients Drug Y at a dose that’s too low to do any good (or so high that it produces side effects). You then conduct that study absolutely perfectly and get the correct result that your drug is better than the other drug at the wrong dosage. This doesn’t seem like the sort of thing Bayesian statistics could fix; in fact, it sounds like interpreting such a study would require domain-specific medical knowledge: someone who could say “Wait a second, that’s not how we usually give penicillin!” I don’t know whether this means industry studies that compare their drug against a placebo are more trustworthy.
So, summary. Industry studies seem to hover around 85% positive, non-industry studies around 65%. Part of this is probably because industry studies are more likely to be on drugs for which there is already some evidence of efficacy, and not due to scientific misconduct at all. More of it is due to publication bias and to getting the right answer to a wrong question, like “Does this work better than another drug when the other drug is given improperly?”.
Phrases like “Industry studies are four times more likely to show positive results” are connotatively inaccurate and don’t support any of these proposals at all, except maybe the one to reduce publication bias.
This reinforces my prejudice that a lot of the literature on how misleading the literature is, is itself among the best examples of how misleading the literature is.
Yes, “four times as likely” is not the same as an odds ratio of four. And the problem here is the same as the problem in army1987’s LL link: odds ratios get mangled in transmission.
But I like odds ratios. In the limit of small probabilities, odds ratios are the same as “times as likely.” But there’s nothing 4x as likely as 50%. Does that mean that 50% is very similar to all larger probabilities? Odds ratios are unchanged (or inverted) by taking complements: 4% to 1% is an odds ratio of about 4; 99% to 96% is also 4 (actually 4.1 in both cases). Complementation is exactly what’s going on here. The drug companies get 1.2x-1.3x as many positive results as the independent studies. That doesn’t sound so big, but everyone is likely to get positive results. If we speak in terms of negative results, the independent studies are 2-3x as likely to get negative results as the drug companies. Now it sounds like a big effect.
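A small sketch of that invariance, in plain Python, using the approximate figures from this thread (the ~87%/65% positives and their ~13%/35% complements):

```python
# Small sketch (plain Python): the odds ratio is unchanged when you flip
# "positive result" to "negative result", while "times as likely" swings
# between a modest-looking and a dramatic-looking number.

def odds(p):
    return p / (1 - p)

def odds_ratio(p1, p2):
    return odds(p1) / odds(p2)

# Complementation leaves the odds ratio alone (up to inversion):
print(odds_ratio(0.04, 0.01))  # ~4.1
print(odds_ratio(0.99, 0.96))  # ~4.1, the same

# Industry vs. independent, framed both ways (~87% vs. ~65% positive):
print(0.87 / 0.65)             # ~1.3x as many positive results -- sounds small
print(0.35 / 0.13)             # ~2.7x as many negative results -- sounds big
print(odds_ratio(0.87, 0.65))  # ~3.6
print(odds_ratio(0.35, 0.13))  # ~3.6 -- identical, whichever way you frame it
```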
Odds ratios give a canonical distance between probabilities that doesn’t let people cherry-pick between 34% more positives and 3x more negatives. They give us a way to compare any two probabilities that is the obvious one for very small probabilities and is related to the obvious one for very large probabilities. The cost of interpolating between the ends is that they are confusing in the middle. In particular, this “3x more negatives” turns into an odds ratio of 4.
Sometimes 50% really is similar to all larger probabilities. Sometimes you have a specific view on things and should use that, rather than the off-the-shelf odds ratio. But that doesn’t seem to be true here.
Thank you for this. I’ve always been frustrated with odds ratios, but somehow it never occurred to me that they have the beautiful and useful property you describe.
I don’t know as much about odds ratios as I would like to, but you’ve convinced me that they’re something I should learn thoroughly, ASAP. Does anybody have a link to a good explanation of them?
This reinforces my prejudice that a lot of the literature on how misleading the literature is, is itself among the best examples of how misleading the literature is.
At the least, it allows one to argue that the claim “scientific papers are generally reliable” is self-undermining. The prior probability that papers are unreliable is also high, given the revolving door of “study of the week” science reporting we are all regularly exposed to.
This reinforces my prejudice that a lot of the literature on how misleading the literature is, is itself among the best examples of how misleading the literature is.
A lot of the literature on cognitive biases is itself among the best examples of how biased people are (though unfortunately not usually in ways that would prove their point, with the obvious exception of confirmation bias).
http://lesswrong.com/lw/8lr/logodds_or_logits/ would be helpful for you, I think, since an explanation/introduction was the stated goal.
Sorry, I don’t have any sources. If you want suggestions from other people, you should try the open thread.
Some related words that may be helpful in searching for material are logit and logistic (regression).
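For whatever it’s worth, a tiny sketch (plain Python, my own illustration rather than anything from the linked post) of how those terms connect to odds ratios: the logit is just the log of the odds, and on that scale an odds ratio becomes a difference, which is what logistic regression coefficients measure.

```python
import math

def logit(p):
    """Log-odds of a probability; logistic regression works on this scale."""
    return math.log(p / (1 - p))

p_industry, p_independent = 0.87, 0.65   # the rounded figures from upthread
diff = logit(p_industry) - logit(p_independent)
print(diff)            # ~1.28, the gap in log-odds
print(math.exp(diff))  # ~3.6, the corresponding odds ratio
```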
Thanks for this. I’ve removed the offending sentence.
Language Log: Thou shalt not report odds ratios
Or if you want to appropriate a different popular phrase, “Never tell me the odds ratio!”
Seems like both teaching about biases and learning about biases are dangerous.