Nissen et al 2016 (“Publication bias and the canonization of false facts”) give a simple model for how publication bias in academic research can have an effect similar to the “information cascades” described in the OP. A false scientific claim is likely to be falsified by any given experiment, but will sometimes yield a (false) positive result, and positive results supporting the claim may be more likely to be published than negative results against it. The authors’ model assumes that the scientific community’s credence in the claim is determined by the number of published positive and negative results, and that new studies repeatedly test the claim until that credence gets sufficiently close to 0 or 1. Publication bias in favor of positive results can overpower the odds against obtaining a positive result in any given replication, so that false claims become canonized as fact with non-negligible probability.
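To make the mechanism concrete, here is a minimal simulation sketch in the spirit of that model. The parameter values, names, and the exact Bayesian update below are my own illustrative choices rather than the paper’s; the key feature it does preserve is that the community updates on each published result as if there were no publication bias, so suppressed negative results can let a false claim drift toward canonization.

```python
import random

def simulate_canonization(
    claim_is_true=False,
    false_positive_rate=0.05,   # chance a false claim yields a positive result (illustrative)
    power=0.8,                  # chance a true claim yields a positive result (illustrative)
    pub_prob_positive=1.0,      # positive results essentially always published (assumed)
    pub_prob_negative=0.1,      # negative results published much less often (assumed)
    prior=0.5,                  # community's initial credence in the claim (assumed)
    threshold=0.99,             # credence beyond which the claim is canonized or rejected
    max_experiments=10_000,
):
    """Return True if the claim ends up canonized as fact, False if rejected."""
    credence = prior
    for _ in range(max_experiments):
        # Run one experiment: the outcome depends on whether the claim is actually true.
        p_positive = power if claim_is_true else false_positive_rate
        positive = random.random() < p_positive

        # Publication bias: the result only enters the literature with some probability.
        pub_prob = pub_prob_positive if positive else pub_prob_negative
        if random.random() >= pub_prob:
            continue  # unpublished results never influence the community's credence

        # The community updates by Bayes' rule on the published result, without
        # correcting for the publication bias itself.
        if positive:
            likelihood_true, likelihood_false = power, false_positive_rate
        else:
            likelihood_true, likelihood_false = 1 - power, 1 - false_positive_rate
        numerator = credence * likelihood_true
        credence = numerator / (numerator + (1 - credence) * likelihood_false)

        if credence >= threshold:
            return True   # canonized as fact
        if credence <= 1 - threshold:
            return False  # rejected

    return credence >= 0.5  # no consensus reached; report the current leaning

# Rough estimate of how often a *false* claim gets canonized under these assumptions.
trials = 2000
canonized = sum(simulate_canonization(claim_is_true=False) for _ in range(trials))
print(f"False claim canonized in {canonized / trials:.1%} of runs")
```

With the negative-result publication probability set well below the positive one, a nontrivial fraction of runs end with the false claim canonized; raising it toward parity makes canonization of false claims rare, which is the qualitative behavior the paper emphasizes.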
The mechanism here differs somewhat from the “information cascade” examples in the OP and on the Wikipedia page, in that the false claim is being repeatedly tested with new experiments. However, I think it can be seen as fundamentally the same as the citation bias example of Greenberg 2009 in the OP, if we treat the scientific community rather than an individual scientist as the actor. In the Greenberg 2009 example, the problem is that individual scientists tend to cite only positive findings; in the Nissen et al model, the scientific community disproportionately publishes positive findings. (Of course, this second problem feeds into the first.)