I would love to hear a more detailed discussion of the problems with meta-analysis.
Very, very briefly (I’m preparing a very long blog post on this, but I want to post it when Dr Hickey, my uncle, releases his book on this, which won’t be for a while yet): meta-analysis is essentially a method for magnifying the biases of the analyst. When collating the papers, nobody is blinded to anything, so it’s very, very easy to remove papers that the people doing the analysis disagree with (approximately 1% or fewer of the papers that turn up in the initial searches end up being used in most meta-analyses, and these are hand-picked).

On top of this, many meta-analyses include additional unpublished (and therefore unreviewed) data from the trials in the analysis. You can easily see how this could cause problems, I’m sure.

There are many, many problems of this nature. I’d strongly recommend everyone do what I did (for a paper analysing these problems): go to the Cochrane or JAMA sites and read every meta-analysis published in a typical year, without any previous prejudice as to the worth or otherwise of the technique. If you can find a single one that appears to be good science, I’d be astonished...
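To make the selective-inclusion point concrete, here’s a toy simulation (all numbers invented, and `pooled` is just a standard fixed-effect inverse-variance average, not anyone’s actual method): if the analyst gets to drop studies after seeing their results, the pooled estimate moves wherever they like, even when the true effect is zero.

```python
import random

random.seed(0)

# Made-up data: 200 studies of an intervention whose true effect is
# zero. Each study reports a point estimate and a standard error.
true_effect = 0.0
studies = []
for _ in range(200):
    se = random.uniform(0.1, 0.5)
    studies.append((random.gauss(true_effect, se), se))

def pooled(studies):
    """Fixed-effect (inverse-variance weighted) pooled estimate."""
    weights = [1 / se ** 2 for _, se in studies]
    total = sum(w * est for (est, _), w in zip(studies, weights))
    return total / sum(weights)

# An even-handed analyst pools everything and gets close to zero.
print(round(pooled(studies), 3))

# A motivated analyst quietly drops the studies whose point estimate
# "looks wrong" (here: anything negative) and pools the rest.
kept = [s for s in studies if s[0] > 0]
print(round(pooled(kept), 3))
```

Nothing about the arithmetic is wrong in either case; the bias lives entirely in the unrecorded choice of which studies to keep.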
A good systematic review (meta-analysis is the quantitative component thereof, although the terms are often incorrectly used interchangeably) will define inclusion criteria before beginning the review. Papers are then screened independently by multiple parties to see if they fit these criteria, in an attempt to limit bias in the choice of which papers to include. It shouldn’t be quite as arbitrary as you imply.
This is meant to counter publication bias, although it’s fraught with difficulties. Your comment seems to imply that this practice deliberately introduces bias, which is not necessarily the case.
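For what it’s worth, the effect this practice tries to counter is easy to demonstrate with another toy simulation (invented numbers again, and a deliberately crude publication model): if journals preferentially publish “significant” results, pooling only the published trials overstates the true effect, and folding the unpublished data back in pulls the estimate toward the truth.

```python
import random

random.seed(1)

# Made-up data: 300 trials of a true effect of 0.1.
true_effect = 0.1
trials = []
for _ in range(300):
    se = random.uniform(0.1, 0.4)
    est = random.gauss(true_effect, se)
    # Crude publication model: "significant" results (|z| > 1.96)
    # always get published; the rest only 20% of the time.
    published = abs(est / se) > 1.96 or random.random() < 0.2
    trials.append((est, se, published))

def pooled(trials):
    """Fixed-effect (inverse-variance weighted) pooled estimate."""
    weights = [1 / se ** 2 for _, se, _ in trials]
    total = sum(est / se ** 2 for est, se, _ in trials)
    return total / sum(weights)

published_only = [t for t in trials if t[2]]
print(round(pooled(published_only), 3))  # inflated by publication bias
print(round(pooled(trials), 3))          # published + unpublished together
```

So the difficulty isn’t the goal of seeking unpublished data, it’s that the unpublished data arrives without peer review, as noted above.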
Are you aware of the PRISMA statement? If so, can you suggest improvements to the recommended reporting of systematic reviews?
So you’re doing a meta-analysis to show that meta-analysis doesn’t work?
If your thesis is correct, you should also be able to show that meta-analysis does work, by judicious choice of meta-analyses. Which means that there should be some good meta-analyses out there!
Do you have an online copy of this paper? Sounds like my kind of thing.
Afraid not, just the abstract is online at the moment (google “Implications and insights for human adaptive mechatronics from developments in algebraic probability theory”—would point you to a link directly, but Google seems to think that my work network is sending automated requests, and has blocked me temporarily).
That title will turn away medical people.
Wasn’t my title ;)
Thanks!