Expert estimates of probability are often off by factors of hundreds or thousands. [...] I used to be annoyed when the margin of error was high in a forecasting model that I might put together. Now I view it as perhaps the single most important piece of information that a forecaster provides. When we publish a forecast on FiveThirtyEight, I go to great lengths to document the uncertainty attached to it, even if the uncertainty is sufficiently large that the forecast won’t make for punchy headlines.
One might expect it [our gut-feel sense] to be especially bad in the case of presidential primaries. There have been only about 15 competitive nomination contests since we began picking presidents this way in 1972. Some of them — like the nominations of George McGovern in 1972 and Jimmy Carter in 1976 — are dismissed by experts because their outcomes did not agree with their paradigm of how presidents are chosen. (Another fundamental error: when you have so little data, you should almost never throw any of it out, and you should be especially wary of doing so when it happens to contradict your hypothesis.)
Nate Silver
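
The parenthetical about throwing out data is worth making concrete. Here is a minimal simulation sketch, with entirely hypothetical numbers (a 70% "true" win rate for the establishment favorite, and the roughly 15-contest sample Silver describes): an analyst who dismisses even two inconvenient outcomes, the "McGovern and Carter don't count" move, systematically inflates the estimated rate.

```python
import random

# Hypothetical setup: assume the establishment favorite truly wins 70% of
# competitive primaries, and we only ever observe 15 contests.
TRUE_P = 0.70
N_CONTESTS = 15
N_SIMS = 100_000

random.seed(0)
honest_est, biased_est = [], []

for _ in range(N_SIMS):
    outcomes = [random.random() < TRUE_P for _ in range(N_CONTESTS)]
    wins = sum(outcomes)
    losses = N_CONTESTS - wins

    # Honest analyst: every contest counts, including the inconvenient ones.
    honest_est.append(wins / N_CONTESTS)

    # Biased analyst: dismisses up to two contrary outcomes as "special
    # cases" before estimating from what remains.
    dropped = min(losses, 2)
    biased_est.append(wins / (N_CONTESTS - dropped))

print(f"honest estimate: {sum(honest_est) / N_SIMS:.3f}")  # ~0.70, unbiased
print(f"biased estimate: {sum(biased_est) / N_SIMS:.3f}")  # ~0.80, inflated
```

Keeping all 15 contests recovers the true 70 percent on average; dismissing just two contrary outcomes pushes the estimate to roughly 80 percent. With so few data points, each discarded observation carries enormous weight, which is exactly why throwing out the ones that contradict your hypothesis is so corrosive.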
From the same post: