When Nate Silver of FiveThirtyEight.com used Bayes to predict the results of the November 2008 presidential race, he correctly predicted the winner in 49 states, an unmatched record among pollsters.
Intrade got it equally right, and to be honest there’s nothing particularly “Bayesian” about Nate Silver’s methodology. It’s just an intelligently weighted average of polling data.
I think the premise is that, if you are weighting the importance of polls based on how well the polls predicted past elections, you are using the spirit of Bayes, and the only consistent and correct way to do it mathematically is some form of Bayes itself.
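For what it’s worth, the “weighted average is Bayes in spirit” point can be made precise with a standard textbook result (not anything Silver has confirmed using): if each poll is modeled as a Gaussian estimate whose variance shrinks like 1/n, the Bayesian posterior mean under a flat prior is exactly the sample-size-weighted average. A minimal Python sketch, with made-up poll numbers:

```python
def posterior_mean(polls):
    """Combine independent Gaussian estimates by precision (1/variance)
    weighting. With poll variance ~ p(1-p)/n, precision is roughly
    proportional to sample size n (treating p(1-p) as near-constant),
    so the flat-prior Bayesian posterior mean reduces to a
    sample-size-weighted average of the reported shares."""
    # polls: list of (reported_share, sample_size); precision ∝ n
    total_precision = sum(n for _, n in polls)
    return sum(share * n for share, n in polls) / total_precision

# Two invented polls: the larger one pulls the combined estimate toward itself
print(round(posterior_mean([(0.54, 900), (0.50, 300)]), 3))  # prints 0.53
```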
IIRC his weights were based on objective quality metrics like sample size and recency.
When you say “objective quality metrics,” how can they be determined to be such without using prior knowledge?
For sample size, it’s actually objectively measurable. For recency and the like, you can just use your expert judgment and validate it against data with ad hoc techniques.
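To make the “sample size plus recency” idea concrete, here is a hedged sketch, not Silver’s actual formula: weight each poll by its sample size and by an exponential recency decay (the 14-day half-life is invented), then take the weighted average.

```python
def poll_weight(sample_size, days_old, half_life=14.0):
    """Weight a poll by statistical precision (proportional to sample
    size, since sampling variance shrinks like 1/n) times an
    exponential recency decay with a hypothetical half-life in days."""
    recency = 0.5 ** (days_old / half_life)
    return sample_size * recency

def weighted_average(polls):
    """polls: list of (candidate_share, sample_size, days_old)."""
    weights = [poll_weight(n, d) for _, n, d in polls]
    num = sum(w * share for w, (share, _, _) in zip(weights, polls))
    return num / sum(weights)

# Illustrative (made-up) polls: (share, sample size, days old)
polls = [(0.52, 1000, 2), (0.48, 600, 10), (0.55, 400, 25)]
print(round(weighted_average(polls), 3))  # prints 0.512
```

The recent, large poll dominates; the old, small one barely moves the estimate.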
Ask Nate Silver for details if you wish. He never indicated he has a big Bayesian model behind all that.
You reach a point very early where model uncertainty makes Bayesian methods no better than ad hoc methods.
I don’t mean to argue that Nate Silver had a “big Bayesian model behind all that.” But if sample size and recency increase the reliability of polls, you can objectively measure how much they do, and it seems that with Bayesian methods you could construct an objectively best prior weighting system, which seems to be the point that Vaniver was making.
I’m not immediately familiar with the math, but it seems odd to me that doing a regression for a “best prior” would be much more work than coming up with an ad hoc method, especially considering that “expert judgment” tends to be really bad (at least according to Bishop and Trout).
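The “regression for a best prior” suggestion can be sketched as follows: instead of hand-picking the recency decay, estimate it from past elections by minimizing squared prediction error. The data and the single decay parameter below are invented for illustration; a real treatment would fit more parameters, and could put a proper Bayesian posterior over them.

```python
# Hypothetical past data: each election is (polls, actual_share),
# with polls given as (reported_share, days_before_election).
past = [
    ([(0.50, 2), (0.56, 20)], 0.51),
    ([(0.47, 5), (0.42, 30)], 0.46),
]

def predict(polls, half_life):
    """Recency-decay-weighted average of poll shares."""
    weights = [0.5 ** (days / half_life) for _, days in polls]
    num = sum(w * share for w, (share, _) in zip(weights, polls))
    return num / sum(weights)

def fit_half_life(data, grid=range(1, 61)):
    """Pick the decay half-life (in days) that minimizes squared error
    on past outcomes -- a crude grid-search stand-in for a full
    regression or a Bayesian posterior over the parameter."""
    return min(grid, key=lambda h: sum((predict(p, h) - y) ** 2
                                       for p, y in data))

best = fit_half_life(past)
```

The point is only that the decay parameter is estimated from outcomes rather than guessed, which is the sense in which the weighting stops being ad hoc.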
Of course, I should probably wait to disagree until he [Nate Silver] gets something wrong.
Theories that describe reality at a deep level face problems such as unclear intellectual ownership of intelligent methods when the methods aren’t clearly inspired by the theoretical tradition. It’s a good problem to have.