Fairly often. One strategy I’ve seen is to compare meta-analyses to a later very large study (rare, for obvious reasons, when dealing with RCTs) and see how often the confidence interval is blown; usually much more often than it should be. (The idea is that the larger study gives a higher-precision result which serves as a ‘ground truth’ or oracle for the meta-analysis’s estimate; and because it comes later, it cannot have been included in the meta-analysis, nor can it have led the meta-analysts into Millikan-style distortion of their results to get the ‘right’ answer.)
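One way the coverage can get blown: unmodeled between-study heterogeneity. Below is a minimal simulation sketch (every number in it, the 10 trials, the SEs, the heterogeneity tau, is invented purely for illustration): small heterogeneous trials are pooled with a fixed-effect model, and the resulting 95% CI is checked against a later high-precision mega-trial. The miss rate comes out well above the nominal ~5%.

```python
# Illustrative sketch only: parameters are made up, not taken from any study.
import numpy as np

rng = np.random.default_rng(0)
reps, k = 5_000, 10              # 5,000 simulated meta-analyses of 10 small trials
se_small, se_large = 0.20, 0.02  # standard errors: small trials vs the mega-trial
tau, mu = 0.15, 0.30             # between-study SD of true effects; grand-mean effect

misses = 0
for _ in range(reps):
    theta = rng.normal(mu, tau, k)       # each small trial's own true effect
    est = rng.normal(theta, se_small)    # each small trial's observed effect
    pooled = est.mean()                  # fixed-effect pool (equal SEs => equal weights)
    se_pooled = se_small / np.sqrt(k)    # fixed-effect SE: ignores tau entirely
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    big = rng.normal(mu, se_large)       # the later, high-precision trial
    misses += not (lo <= big <= hi)      # CI 'blown'?

print(f"mega-trial outside the meta-analytic 95% CI: {misses / reps:.0%}")
```

Set tau to 0 and the miss rate drops back to roughly the nominal 5%; the excess is pure unmodeled heterogeneity (which is part of what the random-effects dispute below is about).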
For an empirical example: LeLorier J, Gregoire G, Benhaddad A, Lapierre J, Derderian F. “Discrepancies between meta-analyses and subsequent large randomized, controlled trials”, N Engl J Med 1997;337:536-42. From its abstract:

Results: We identified 12 large randomized, controlled trials and 19 meta-analyses addressing the same questions. For a total of 40 primary and secondary outcomes, agreement between the meta-analyses and the large clinical trials was only fair (kappa = 0.35; 95% confidence interval, 0.06-0.64). The positive predictive value of the meta-analyses was 68%, and the negative predictive value 67%. However, the difference in point estimates between the randomized trials and the meta-analyses was statistically significant for only 5 of the 40 comparisons (12%). Furthermore, in each case of disagreement a statistically significant effect of treatment was found by one method, whereas no statistically significant effect was found by the other.
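The abstract’s summary statistics can be back-calculated into a 2×2 significant/non-significant agreement table; the cell counts below (13/6/7/14) are one reconstruction consistent with kappa = 0.35, PPV = 68%, and NPV = 67% over 40 outcomes, my arithmetic rather than figures quoted from the paper:

```python
# One 2x2 agreement table consistent with the abstract's numbers
# (cell counts back-calculated by me, not quoted from LeLorier et al).
a, b = 13, 6    # meta-analysis significant:     trial agrees / trial disagrees
c, d = 7, 14    # meta-analysis not significant: trial disagrees / trial agrees
n = a + b + c + d                    # 40 outcomes

ppv = a / (a + b)                    # 13/19 ~= 0.68
npv = d / (c + d)                    # 14/21 ~= 0.67
p_obs = (a + d) / n                  # observed agreement: 27/40 = 0.675
p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement: 0.50
kappa = (p_obs - p_exp) / (1 - p_exp)                    # Cohen's kappa: 0.35
print(f"PPV={ppv:.2f}  NPV={npv:.2f}  kappa={kappa:.2f}")
```

Note what ‘only fair’ agreement means concretely here: chance alone would produce agreement on ~50% of the 40 outcomes, and the meta-analyses managed 67.5%.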
(You can probably dig up more results by looking through reverse citations of that paper, since it seems to be the originator of this criticism. See also, although I disagree with a lot of it, “Combining heterogenous studies using the random-effects model is a mistake and leads to inconclusive meta-analyses”, Al Khalaf et al 2010.)
I’m not sure how much to trust these meta-meta-analyses. If only someone would aggregate them and test their accuracy against a control.