An information cascade is a problem in group rationality. Wikipedia has excellent introductions and links about the phenomenon, but here is a meta-ish example using likelihood ratios.
Suppose in some future version of this site, there are several well-known facts:
All posts come in two kinds: high quality (insightful and relevant) and low quality (old ideas rehashed, long hypotheticals).
There is a well-known prior: any given post has a 60% chance of being high quality rather than low quality. (We’re doing well!)
Readers get a private signal, either “high” or “low”, their personal judgement of quality, which is wrong 20% of the time.
The number of up and down votes is displayed next to each post. (Note the difference from the present system, which only displays up minus down. This assumption makes the math easier.)
Readers are competent in Bayesian statistics and strive to vote the true quality of the post.
Let’s talk about how the very first reader would vote. If they judged the post high quality, they would multiply the prior odds (6:4) by the Bayes factor for a high private signal (4:1), get (6*4 : 4*1) = (24:4) = (6:1), and vote the post up. If they judged the post low quality, they would instead multiply by the Bayes factor for a low private signal (1:4), get (6*1 : 4*4) = (6:16) = (3:8), and vote the post down.
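To make the arithmetic concrete, here is a minimal sketch of the odds calculation in Python (my own illustration, not part of the original setup; the names PRIOR_ODDS, BAYES_FACTOR and posterior_odds are invented for the example). It uses exact fractions so the ratios stay readable.

```python
from fractions import Fraction

# Prior odds of high:low quality, from the well-known 60% prior.
PRIOR_ODDS = Fraction(6, 4)

# Bayes factor of one private signal: an 80%-accurate judgement gives
# 0.8/0.2 = 4:1 when it says "high", and 0.2/0.8 = 1:4 when it says "low".
BAYES_FACTOR = {"high": Fraction(4, 1), "low": Fraction(1, 4)}

def posterior_odds(signals, prior=PRIOR_ODDS):
    """Multiply the prior odds by the Bayes factor of each signal."""
    odds = prior
    for s in signals:
        odds *= BAYES_FACTOR[s]
    return odds

# The first reader's two cases:
print(posterior_odds(["high"]))  # 6    i.e. 6:1, vote up
print(posterior_odds(["low"]))   # 3/8  i.e. 3:8, vote down
```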
There were two scenarios for the first reader (private information high or low). If we speculate that the first reader did in fact vote up, then there are two scenarios for the second reader:
Personal judgement high: (6:4)*(4:1)*(4:1) = (24:1), vote up.
Personal judgement low: (6:4)*(1:4)*(4:1) = (6:4), vote up against personal judgement.
Note that now there are two explanations for ending up two votes up: it could be that the second reader actually agreed, or it could be that the second reader was following the first reader and the prior against their own personal judgement. That means the third reader gets zero information from the second reader’s vote; it contributes a Bayes factor of only (1:1). So the two scenarios for the third reader, and for every future reader, are exactly analogous to the two scenarios for the second reader (a small simulation after the two scenarios below makes this concrete):
Personal judgement high: (6:4)*(4:1)*(4:1) = (24:1), vote up.
Personal judgement low: (6:4)*(1:4)*(4:1) = (6:4), vote up against personal judgement.
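To watch the cascade play itself out, here is a small simulation sketch, again my own illustration built on the assumptions above (the helper names public_odds and cast_vote are invented for the example). Each reader treats a past vote as informative only if, given the public odds at the time it was cast, the vote would have differed depending on the voter’s private signal; otherwise it contributes a Bayes factor of (1:1).

```python
import random
from fractions import Fraction

PRIOR_ODDS = Fraction(6, 4)                                # 60% prior on high quality
FACTOR = {"high": Fraction(4, 1), "low": Fraction(1, 4)}   # 80%-accurate private signal

def public_odds(votes):
    """Odds implied by the public vote history alone."""
    odds = PRIOR_ODDS
    for vote in votes:
        up_if_high = odds * FACTOR["high"] > 1
        up_if_low = odds * FACTOR["low"] > 1
        if up_if_high != up_if_low:
            # The vote would have differed by signal, so it reveals the signal.
            odds *= FACTOR["high"] if vote == "up" else FACTOR["low"]
        # Otherwise the vote carries no information; leave the odds unchanged.
    return odds

def cast_vote(votes, private_signal):
    """A reader combines the public odds with their own private signal."""
    odds = public_odds(votes) * FACTOR[private_signal]
    return "up" if odds > 1 else "down"

random.seed(0)
votes = []
for reader in range(10):
    signal = random.choice(["high", "low"])                # hypothetical private signals
    vote = cast_vote(votes, signal)
    votes.append(vote)
    print(f"reader {reader}: signal={signal:<4}  vote={vote}")
```

In this setup a single up vote is already enough to lock every later reader into voting up whatever their private signal says, while it takes two informative down votes before a downward cascade locks in; either way, once a cascade starts the later votes stop carrying any information.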
This has been a nightmare scenario of groupthink afflicting even diligent Bayesians. Possible conclusions:
Don’t strive to vote the true quality of the post, strive to vote your personal judgement.
Try to avoid even noticing the score. (Maybe scores could even be occluded, like spoiler-text?)
Information cascades are dangerous and interesting. We should develop good cognitive citizenship techniques.
Broadcast novel evidence, not conclusions.
Note: Olle found an error that necessitated a rewrite. I apologize.