Then vitamins are not evil, as the paper claims.
Roughly speaking, can we assume that what they should have written as the paper’s conclusion is the weaker claim:
“Vitamins X and Y are evil at these daily doses; further studies are needed to determine whether they are beneficial at some other dosage and, if so, which dosage is optimal.”
?
It would have been, had that been the only problem with the study. See the comments by me, Dr Steve Hickey, Len Noriega and others here: http://www.cochranefeedback.com/cf/cda/feedback.do?DOI=10.1002/14651858.CD007176&reviewGroup=HM-LIVER
Meta-analyses in general are not to be trusted—at all...
I, too, would like to hear more about the problems of meta-analysis in general. So far it has naively seemed to me that meta-analyses would be more reliable than isolated studies, because they pool a larger number of results and thus reduce the effect of chance and of possible flaws or artifacts in the individual studies.
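That intuition is the textbook case for pooling. As a minimal sketch (my own toy numbers, not from any study discussed here): under a fixed-effect model with inverse-variance weighting, the pooled standard error shrinks roughly as one over the square root of the number of studies, assuming the studies are unbiased estimates of the same true effect.

    import random

    random.seed(0)
    TRUE_EFFECT = 0.2   # hypothetical true effect, identical in every study
    N_PER_STUDY = 50    # hypothetical per-study sample size
    SIGMA = 1.0         # within-study standard deviation

    def simulate_study():
        # One study's estimate: the mean of N noisy observations.
        xs = [random.gauss(TRUE_EFFECT, SIGMA) for _ in range(N_PER_STUDY)]
        return sum(xs) / N_PER_STUDY, SIGMA ** 2 / N_PER_STUDY

    def fixed_effect_pool(studies):
        # Inverse-variance weighting: the pooled variance is 1 / (sum of weights),
        # so it falls as more (unbiased) studies are added.
        weights = [1.0 / var for _, var in studies]
        pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
        return pooled, (1.0 / sum(weights)) ** 0.5

    studies = [simulate_study() for _ in range(10)]
    for k in (1, 3, 10):
        pooled, se = fixed_effect_pool(studies[:k])
        print(f"k={k:2d}  pooled={pooled:+.3f}  se={se:.3f}")

The catch is the “unbiased” assumption, which is exactly what the objections below attack.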
I think the problem is that each study has to make many arbitrary decisions about aspects of the experimental protocol. Each such decision will be made the same way for every subject in a single study, but will vary across studies. There are so many of these decisions that, if the meta-analysis were to include them as covariates, each study would introduce enough new variables to cancel out the statistical power gained by adding that study (a rough sketch of this bookkeeping follows).
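A back-of-the-envelope version of this argument (my own sketch, with invented numbers): in a meta-regression with k studies and p study-level covariates, the residual degrees of freedom are roughly k - p - 1. If protocols are comparable, p stays fixed and the degrees of freedom accumulate; if every new study brings one new protocol quirk that needs its own covariate, they never do.

    # Residual degrees of freedom in a meta-regression with k studies and
    # p study-level covariates (one coefficient each, plus an intercept).
    def residual_df(k: int, p: int) -> int:
        return k - p - 1

    # Comparable protocols: p is fixed, so pooling really does buy power.
    print([residual_df(k, 3) for k in range(5, 30, 5)])      # [1, 6, 11, 16, 21]

    # Hypothetical worst case: each new study adds one new protocol covariate,
    # so p grows in step with k and the residual df are stuck at 1.
    print([residual_df(k, k - 2) for k in range(5, 30, 5)])  # [1, 1, 1, 1, 1]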
I would love to hear a more detailed discussion of the problems with meta-analysis.
Very, very briefly (I’m preparing a very long blog post on this, but I want to post it when Dr Hickey, my uncle, releases his book on this, which won’t be for a while yet) - meta-analysis is essentially a method for magnifying the biases of the analyst. When collating the papers, nobody is blinded to anything so it’s very, very easy to remove papers that the people doing the analysis disagree with (approx 1% or fewer of papers that turn up in initial searches end up getting used in most meta-analyses, and these are hand-picked).

On top of this, many of them include additional unpublished (and therefore unreviewed) data from trials included in the analysis. You can easily see how this could cause problems, I’m sure.

There are many, many problems of this nature. I’d strongly recommend everyone do what I did (for a paper analysing these problems) - go to the Cochrane or JAMA sites, and just read every meta-analysis published in a typical year, without any previous prejudice as to the worth or otherwise of the technique. If you can find a single one that appears to be good science, I’d be astonished...
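To make the “magnifying the biases of the analyst” point concrete, here is a toy simulation (entirely hypothetical numbers, not modelled on any real review): the true effect is exactly zero, but an analyst who keeps only the studies pointing the “right” way gets a pooled estimate that looks decisively non-null.

    import random

    random.seed(1)

    def study(true_effect=0.0, n=50, sigma=1.0):
        # One simulated trial: effect estimate and its (known) variance.
        xs = [random.gauss(true_effect, sigma) for _ in range(n)]
        return sum(xs) / n, sigma ** 2 / n

    def pool(studies):
        # Fixed-effect inverse-variance pooling.
        weights = [1.0 / var for _, var in studies]
        est = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
        return est, (1.0 / sum(weights)) ** 0.5

    trials = [study() for _ in range(200)]    # true effect is exactly 0
    kept = [t for t in trials if t[0] < 0]    # keep only the "harmful-looking" half

    for label, subset in (("all 200 trials", trials), ("hand-picked subset", kept)):
        est, se = pool(subset)
        print(f"{label}: pooled = {est:+.3f} (se {se:.3f}), z = {est / se:+.1f}")

The one-directional filter here is a caricature; real selection pressure is subtler, but it pushes the pooled estimate the same way.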
When collating the papers, nobody is blinded to anything so it’s very, very easy to remove papers that the people doing the analysis disagree with...

A good systematic review (meta-analysis is the quantitative component thereof, although the terms are often incorrectly used interchangeably) will define its inclusion criteria before beginning the review. Papers are then screened independently by multiple parties to see whether they fit these criteria, in an attempt to limit bias in the choice of which papers to include. It shouldn’t be quite as arbitrary as you imply.
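Agreement between the independent screeners is itself usually measured; a common choice is Cohen’s kappa, the chance-corrected agreement rate. A minimal sketch with invented screening decisions (not from any actual review):

    def cohens_kappa(a, b):
        # Chance-corrected agreement between two raters' include/exclude calls.
        n = len(a)
        observed = sum(x == y for x, y in zip(a, b)) / n
        # Agreement expected if the two raters decided independently.
        p_a, p_b = sum(a) / n, sum(b) / n
        expected = p_a * p_b + (1 - p_a) * (1 - p_b)
        return (observed - expected) / (1 - expected)

    # Invented decisions (1 = include, 0 = exclude) for twelve abstracts.
    rater1 = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0]
    rater2 = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0]
    print(f"kappa = {cohens_kappa(rater1, rater2):.2f}")  # disagreements go to a third party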
On top of this, many of them include additional unpublished (and therefore unreviewed) data from trials included in the analysis.

This is meant to counter publication bias, although it’s fraught with difficulties. Your comment seems to imply that the practice deliberately introduces bias, which is not necessarily the case.
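As a hypothetical illustration of why reviewers bother chasing unpublished data: if only statistically significant results get published, pooling the published studies alone overstates the effect, and folding the unpublished trials back in moves the estimate toward the truth (toy numbers, not a real dataset).

    import random

    random.seed(2)

    def study(true_effect=0.1, n=50, sigma=1.0):
        xs = [random.gauss(true_effect, sigma) for _ in range(n)]
        return sum(xs) / n, sigma / n ** 0.5    # estimate and standard error

    trials = [study() for _ in range(200)]
    # Publication filter: only "significant" results (|z| > 1.96) reach journals.
    published = [t for t in trials if abs(t[0] / t[1]) > 1.96]

    def mean_effect(ts):
        # All variances are equal here, so inverse-variance pooling is just the mean.
        return sum(e for e, _ in ts) / len(ts)

    print("true effect:             +0.100")
    print(f"published only ({len(published):3d}):    {mean_effect(published):+.3f}")
    print(f"published + unpublished: {mean_effect(trials):+.3f}")

Whether the unpublished data are retrieved and handled well is a fair question, but the motivation is to reduce bias, not to introduce it.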
Are you aware of the PRISMA statement? If so, can you suggest improvements to the recommended reporting of systematic reviews?
So you’re doing a meta-analysis to show that meta-analysis doesn’t work?
If your thesis is correct, you should also be able to show that meta-analysis does work, by judicious choice of meta-analyses. Which means that there should be some good meta-analyses out there!
Do you have an online copy of this paper? Sounds like my kind of thing.
Afraid not; just the abstract is online at the moment (google “Implications and insights for human adaptive mechatronics from developments in algebraic probability theory”; I would point you to a link directly, but Google seems to think that my work network is sending automated requests, and has blocked me temporarily).
That title will turn away medical people.
Wasn’t my title ;)
Thanks!