My thinking is: Given that a scientist has read (or looked at) a paper, they’re more likely to cite it if it’s correct and useful than if it’s incorrect. (I’m assuming that affirmative citations are more common than “X & Y said Z but they’re wrong because...” citations.) If that were all that happened, then the number of citations a paper gets would be strongly correlated with its correctness, and we would expect it to be rare for a bad paper to get a lot of citations. However, if we take into account the fact that citations are also used by other scientists as a reading list, then a paper that has already been cited a lot will be read by a lot of people, of whom some will cite it.
So when a paper is published, there are two forces affecting the number of citations it gets. First, the “badness effect” (“This paper sounds iffy, so I won’t cite it”) pushes down the number of citations. Second, the “popularity effect” (a lot of people have read the paper, so a lot of people are potential citers) pushes up the number of citations. The magnitude of the popularity effect depends mostly on what happens soon after publication, when readership is small and thus more subject to random variation. Of course, for blatantly erroneous papers the badness effect will still predominate, but in marginal cases (like the aphasia example) the popularity effect can swamp the badness effect. Hence we would expect to see some bad papers getting widely cited; and the more obviously bad a widely cited paper is, the stronger the popularity effect that carried it must have been.
I suppose one could create a computer simulation if one were interested; I would predict results similar to Simkin & Roychowdhury’s.
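To make that concrete, here is a minimal sketch of what such a simulation might look like. It is a toy model of my own devising, not Simkin & Roychowdhury’s: each paper gets a made-up “quality” score (the badness effect: a reader cites it with that probability), and readers pick which paper to read with probability proportional to its current citation count (the popularity effect, i.e. preferential attachment). All the parameter values are arbitrary assumptions.

```python
import random

def simulate(n_papers=200, n_readers=20000, seed=0):
    """Toy citation model.

    Each reader picks one paper to read with probability proportional
    to (1 + its current citations) -- the "popularity effect" -- then
    cites it with probability equal to its quality -- the "badness
    effect" filtering out papers that sound iffy.
    """
    rng = random.Random(seed)
    # Quality = chance that a reader who looks at the paper cites it.
    quality = [rng.uniform(0.1, 0.9) for _ in range(n_papers)]
    citations = [0] * n_papers
    for _ in range(n_readers):
        # Preferential attachment: already-cited papers get read more.
        weights = [1 + c for c in citations]
        paper = rng.choices(range(n_papers), weights=weights)[0]
        if rng.random() < quality[paper]:
            citations[paper] += 1
    return quality, citations

quality, citations = simulate()

# Look for "bad but popular" papers: bottom of the quality range
# yet in the top decile by citation count.
ranked = sorted(range(len(citations)), key=lambda i: -citations[i])
top_decile = set(ranked[: len(ranked) // 10])
bad_but_popular = [i for i in top_decile if quality[i] < 0.3]
print(len(bad_but_popular), "low-quality papers in the top citation decile")
```

If the argument above is right, runs like this should show a heavily skewed citation distribution in which the top papers are usually, but not always, high-quality, with the occasional low-quality paper riding an early run of luck into the top decile.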
I see: in the case where a paper is actually read, deciding it sounds iffy (and therefore declining to cite it) would correlate strongly with it in fact having wrong conclusions.
I was considering that scientists rarely check the conclusions of the papers they cite by reading them, but instead decide from the writing and other signals whether the source is credible. So a well-written paper with a wrong conclusion could keep accumulating citations. But indeed, if a paper is written carefully and its methodology is convincing, it is less likely that the conclusion is wrong.