Great post, I broadly agree that prediction is an underused tool for coordination.
Example 4: avoiding info-cascades
Info-cascades occur when people update off of each other’s beliefs without appropriately sharing the evidence for those beliefs, so the same pieces of evidence end up “double-counted”. Having a better system for tracking who believes what and why could help solve this, and prediction systems could be one way of doing so.
One danger of using prediction as a coordination mechanism is that it can actually create feedback loops that behave very much like info-cascades. For instance, let’s say a group of experts predicts that a particular research direction is likely to yield insights. The community updates toward those avenues of research being more promising, and therefore allocates more grant funding to them. The additional funding then makes the experts more confident that the domain will yield insights, which causes still more grant funding to be allocated.
That’s bad, but it gets worse. Once the experts know that the community updates on their predictions, they have to make their predictions stronger next time. Suppose that in the counterfactual world where they made no prediction, they’d be only 50% sure that a particular field would produce insights in the next 5 years. If they publicly predict 50%, the resulting funding shift pushes the actual chance up to something like 60%, so a 50% prediction is no longer accurate. If they want to be accurate, this feedback forces them to start the prediction at something like 75%, the point where the predicted probability and the probability induced by the prediction finally agree.
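To make that fixed-point logic concrete, here is a minimal sketch (my own toy model, not anything from the post). Assume publishing a forecast p shifts the true probability to a hypothetical linear function effect(p), invented so that it matches the numbers above (effect(0.5) = 0.6). The only self-consistent forecast is then the fixed point p* = effect(p*), which in this toy model comes out at 0.75.

```python
# Toy model of a self-fulfilling forecast (illustrative assumptions only).
# Publishing a forecast p shifts the true probability to effect(p),
# so an accurate forecaster must publish a fixed point p* = effect(p*).

def effect(p: float) -> float:
    """Hypothetical true chance of 'insights in 5 years' after a forecast p is published.
    Linear coefficients are made up to match the comment: effect(0.5) = 0.6, fixed point 0.75."""
    return 0.3 + 0.6 * p  # more funding flows to the area as the published forecast rises

def accurate_forecast(effect, p0: float = 0.5, tol: float = 1e-9, max_iter: int = 1000) -> float:
    """Find p* with p* = effect(p*) by simple fixed-point iteration."""
    p = p0
    for _ in range(max_iter):
        p_next = effect(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

print(effect(0.5))                # 0.6  -- publishing "50%" already raises the real chance
print(accurate_forecast(effect))  # ~0.75 -- the only forecast consistent with its own impact
```

The iteration converges here because the assumed feedback is damped (slope 0.6 < 1); if the community reacted more strongly than that, small differences in published forecasts could blow up instead of settling at a fixed point, which is the cascade worry in a nutshell.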