I’m confused. I parsed this as “papers which contain no citations are considered bad sources,” but it seems that everyone else is parsing it as “papers which have not been cited are considered bad sources.” Am I making a mistake here? The latter doesn’t make much sense to me, but Zed hasn’t stepped in to correct that interpretation.
Look at the context of the first two paragraphs and the comment that Zed was replying to. The discussion was about how many papers never get cited at all. In that context, he seems to be talking about people not citing papers unless they have already been cited.
It’s not clear to me that he was talking about studies being ignored because they’re not interesting enough to cite, rather than studies being ignored because they’re not trustworthy enough to cite.
In any case, I think both are dubious safety mechanisms. John Ioannidis found that even the most commonly cited studies in medical research are highly likely to be false. If researchers are basing their trust in studies on the rate at which they’re cited, they’re likely to be subject to information cascades, double counting the information that led other researchers to cite the same study.
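To illustrate the double-counting worry, here is a minimal sketch (a toy Bayesian model; the prior and likelihoods are made-up numbers of mine, not anything from Ioannidis or the thread). One researcher actually evaluates a study; ten more cite it because it was already cited. Treating all eleven citations as independent evidence drives confidence toward certainty off a single real evaluation:

```python
# Toy model of citation double counting (made-up numbers, my own sketch).
# One researcher actually evaluates a study; later researchers cite it
# because it was already cited. Counting every citation as independent
# evidence drives confidence toward certainty off a single real signal.

def posterior(prior, p_signal_if_true, p_signal_if_false, n_signals):
    """Posterior P(study true) after n_signals treated as independent."""
    p_true = prior * p_signal_if_true ** n_signals
    p_false = (1 - prior) * p_signal_if_false ** n_signals
    return p_true / (p_true + p_false)

prior = 0.5                       # assumed prior that the study is true
p_if_true, p_if_false = 0.7, 0.4  # assumed chance of a positive read

one_real_evaluation = posterior(prior, p_if_true, p_if_false, n_signals=1)
cascade_of_citations = posterior(prior, p_if_true, p_if_false, n_signals=11)

print(f"After the one real evaluation:        {one_real_evaluation:.2f}")   # ~0.64
print(f"Treating 11 citations as independent: {cascade_of_citations:.2f}")  # ~1.00
```

The honest update stops at the single real evaluation; everything past that is the same signal counted again.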
As for only citing papers that themselves contain numerous citations: this helps when a paper contains many redundant citations, since those demonstrate that its factual claims have been independently replicated. But if a paper relies on many uncertain findings, its own uncertainty is multiplied; the conclusion is at most as strong as its weakest link.
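To put rough numbers on that (illustrative figures of my own, not from the thread): if a conclusion $C$ requires cited findings $F_1, \dots, F_n$ to all hold, then

$$P(C) \;\le\; P(F_1 \wedge \cdots \wedge F_n) \;\le\; \min_i P(F_i),$$

and if the findings are roughly independent, $P(F_1 \wedge \cdots \wedge F_n) = \prod_{i=1}^{n} P(F_i)$. Three findings that are each 90% likely to hold already leave the conclusion at $0.9^3 \approx 0.73$, noticeably weaker than the weakest link alone.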