[Question] Formalising continuous info cascades? [Info-cascade series]
This is a question in the info-cascade question series. There is a prize pool of up to $800 for answers to these questions. See the link above for full background on the problem (including a bibliography) as well as examples of responses we’d be especially excited to see.
Mathematically formalising info-cascades would be great.
Fortunately, it’s already been done in the simple case. See this excellent LW post by Johnicholas, where he uses upvotes/downvotes as an example and shows that after the second person has voted, all future voters add zero new information to the system. His explanation using likelihood ratios is the most intuitive I’ve found.
The Wikipedia entry on the subject is also quite good.
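To make the binary mechanism concrete, here is a minimal simulation sketch (my own toy rendering of the standard sequential-voting model, using the likelihood-ratio framing above; the function and parameter names are mine, not from either source):

```python
import math
import random

def run_cascade(p=0.7, n_agents=12, theta=1, seed=1):
    """Toy sequential-voting cascade.

    Each voter receives a private signal that matches the true state
    theta with probability p, sees every earlier vote, and votes for
    the state with the higher posterior (following their own signal on
    a tie). Returns the votes and the public log-odds after each one.
    """
    rng = random.Random(seed)
    s_llr = math.log(p / (1 - p))  # log-likelihood ratio of one signal
    k = 0                          # net revealed signals favouring state 1
    votes, public_log_odds = [], []
    for _ in range(n_agents):
        signal = theta if rng.random() < p else 1 - theta
        # Posterior log-odds = public evidence + own private signal.
        posterior = k * s_llr + (s_llr if signal == 1 else -s_llr)
        vote = signal if posterior == 0 else int(posterior > 0)
        # Once |k| >= 2, the public evidence outweighs any single signal:
        # the voter copies the crowd and their vote reveals nothing new.
        if abs(k) < 2:
            k += 1 if vote == 1 else -1
        votes.append(vote)
        public_log_odds.append(k * s_llr)
    return votes, public_log_odds

votes, log_odds = run_cascade()
print(votes)     # once two net votes agree, everyone copies
print(log_odds)  # the public log-odds freezes when the cascade starts
```

Running this shows the point above: as soon as the net vote margin reaches two, the public evidence swamps any one private signal, and every subsequent vote is pure imitation.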
However, these two entries primarily explain how information cascades when people have to make a binary choice—good or bad, left or right, etc. The question I want to understand is how to think of the problem in a continuous case—do the problems go away? Or more likely, what variables determine the speed at which people update to one extreme? And how far toward that extreme do people go before they realise their error?
Examples of continuous variables include things like project time estimates, stocks, and probabilistic forecasts. It seems very likely that significant quantitative work has already been done on the case of market bubbles. If anyone can write an answer summarising that work and explaining how to apply it to other domains like forecasting, that would be excellent.
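To make the continuous question concrete, here is one toy way to pose it (my own sketch, not taken from the bibliography): agents take turns publishing an estimate of a real-valued quantity, each blending a private noisy signal with the running consensus. The social_weight parameter is purely an assumption, standing in for whatever determines how much each person defers to the crowd:

```python
import random
import statistics

def continuous_herd(true_value=10.0, n_agents=30, noise=2.0,
                    social_weight=0.9, seed=0):
    """Hypothetical continuous analogue of a cascade (toy assumption).

    Agents sequentially publish an estimate of a real quantity, each
    blending a private noisy signal with the mean of all earlier public
    estimates. social_weight is the trust placed in the crowd: at 0
    everyone reports their own signal; near 1 everyone echoes the
    earliest reports and later signals barely register.
    """
    rng = random.Random(seed)
    reports = []
    for _ in range(n_agents):
        signal = rng.gauss(true_value, noise)          # private evidence
        if reports:
            consensus = statistics.fmean(reports)      # public evidence
            estimate = (social_weight * consensus
                        + (1 - social_weight) * signal)
        else:
            estimate = signal                          # first mover
        reports.append(estimate)
    return reports

print(continuous_herd())  # with social_weight near 1, the sequence
                          # locks in close to the first agent's noise
```

In this toy setup, the speed of lock-in toward one extreme is governed by social_weight and the signal noise, which is roughly the shape of answer I’m hoping someone can make rigorous for the continuous case.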