If they were independent, then it would be trivial to update on each of them and arrive at a meta-forecast much greater than 80%. But they’re really not. Many of them are based on the same polls, news, and historical behaviors. They may have different models, but they’re very much not independent forecasts.
If they were independent … But they’re really not.
I agree. That’s why calculating the “combined” forecast is hard—you need to estimate the degree of co-dependency. But as long as the forecasts are not exactly the same, each new one gets you a (metaphorical) bit of information and your posterior probability should creep up from 80%.
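To make that creep concrete, here is a rough sketch in Python; the 50/50 prior, the single correlation parameter rho, and the effective-count adjustment are my own illustrative assumptions, not something the commenter specified.

```python
import math

# Heuristic sketch (my assumptions, not the commenter's): shrink the number of
# "effective" independent forecasts as their assumed pairwise correlation grows,
# then pool log-odds relative to a 50/50 prior as if only n_eff were independent.

def pooled_probability(forecasts, rho):
    n = len(forecasts)
    n_eff = n / (1 + (n - 1) * rho)          # n_eff = n when rho = 0, -> 1 as rho -> 1
    mean_logodds = sum(math.log(p / (1 - p)) for p in forecasts) / n
    pooled = mean_logodds * n_eff            # scale the average evidence by the effective count
    return 1 / (1 + math.exp(-pooled))

for rho in (0.0, 0.5, 0.9, 1.0):
    print(rho, round(pooled_probability([0.8, 0.8, 0.8], rho), 3))
# rho = 0.0 -> ~0.985 (three fully independent 80% forecasts)
# rho = 1.0 -> 0.8   (identical forecasts add nothing beyond the first)
```

At rho = 0 this reduces to naive-Bayes pooling of independent forecasts; at rho = 1 the forecasts carry no information beyond the first one, so the posterior stays at 80%.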
Use simple Bayesian updating on the evidence. A new, different forecast is a new piece of evidence.
But why is it a piece of evidence pointing to greater than 80% instead of 80%?
Basically, it depends on the source of uncertainty. If all the uncertainty is in the random variable being modeled (as it is in the die example), adding more forecasts (or models) changes nothing: you still have the same uncertainty. However, if part of the uncertainty is in the model itself (there is some model error), then you can reduce this model error by combining different (ideally independent) models.
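A small simulation sketch of that distinction; the setup (a true probability of 0.8 and Gaussian model error with standard deviation 0.1) is an illustrative assumption of mine, not from the thread.

```python
import random

random.seed(0)

# Illustration (setup and numbers are my assumptions): the true chance of A
# winning is 0.8. Each "model" reports 0.8 plus its own independent error.
# Averaging models shrinks the model error, but the outcome itself stays random.

TRUE_P = 0.8

def one_model():
    # a noisy estimate of the true probability (independent model error)
    return min(max(TRUE_P + random.gauss(0, 0.1), 0.01), 0.99)

for n_models in (1, 5, 50):
    errors = [abs(sum(one_model() for _ in range(n_models)) / n_models - TRUE_P)
              for _ in range(2000)]
    print(n_models, "models -> mean abs error of combined forecast:",
          round(sum(errors) / len(errors), 3))

# The die case has no model error: every model says exactly 1/6, so averaging
# more of them leaves the forecast (and the uncertainty in the roll) unchanged.
```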
Imagine a forecast that says: I think A will win, but I’m uncertain, so I will say 80% to A and 20% to B. And there is another, different forecast which says the same thing. If you combine the two, your probability of A should be higher than 80%.
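For concreteness, a minimal sketch of that combination, assuming a 50/50 prior and treating the two forecasts as conditionally independent, calibrated evidence (both assumptions are mine):

```python
# Combine forecasts for a binary event by updating in odds form, treating each
# forecast as an independent, calibrated piece of evidence.
# Assumptions (not from the thread): prior P(A) = 0.5, and each forecast is
# conditionally independent given the outcome, with likelihood ratio p / (1 - p).

def combine_forecasts(prior, forecasts):
    """Naive-Bayes pooling of probability forecasts for a binary event."""
    odds = prior / (1 - prior)
    for p in forecasts:
        odds *= p / (1 - p)   # each forecast multiplies the odds by its likelihood ratio
    return odds / (1 + odds)

print(combine_forecasts(0.5, [0.8]))       # ~0.80: one forecast alone just gives 80%
print(combine_forecasts(0.5, [0.8, 0.8]))  # ~0.94: two "independent" 80% forecasts push past 80%
```

Dependence between the forecasts, as discussed above, would pull that ~94% back down toward 80%.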