Since the study focused on the period around the 2008 elections, which the Democrats won at nearly all levels, and since most pundits tend to be biased towards believing that what they wish would happen will happen, it’s not surprising that liberals’ predictions did better and some conservatives scored worse than random. I suspect we’d see the trend go the other way for, say, predictions about the 2010 midterms. The fundamental problem is that the predictions weren’t independent.
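To make the non-independence worry concrete, here’s a toy simulation; the party labels, the 80%/20% hit rates, and the 20-predictions-per-pundit setup are all invented for illustration:

```python
# Toy sketch (hypothetical numbers): if every pundit's predictions hinge on
# one shared outcome (which party has a good year), then per-pundit scores
# cluster by ideology even when no pundit has any real skill.
import random

random.seed(0)
dem_year = random.random() < 0.5  # the single shared outcome for the cycle

for lean in ["liberal"] * 5 + ["conservative"] * 5:
    wishful_dem = (lean == "liberal")  # each pundit predicts their side wins
    # 20 predictions, each correct 80% of the time when the shared outcome
    # goes the pundit's way and 20% of the time when it doesn't:
    hits = sum(random.random() < (0.8 if wishful_dem == dem_year else 0.2)
               for _ in range(20))
    print(f"{lean:12s} {hits}/20")
```

Every pundit here has identical (zero) skill, but on any given run one ideological bloc clusters near the top of the rankings and the other near the bottom.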
Since the correlation between liberalism and correctness was weak, most pundits’ scores probably wouldn’t change much in a more politically average year. In Krugman’s case, for example, most of the scored predictions were economic, not political, forecasts. In Cal Thomas’s case, however, your explanation might basically work.
True; of course, in Krugman’s case I suspect most of his predictions amounted to predicting that the financial crisis was going to be really bad, and thus were also correlated.
Another LW discussion of Krugman’s alleged accuracy pointed both here and to a spreadsheet with the actual predictions. About half of his predictions did indeed amount to saying that the financial crisis was going to be really bad. There were some political ones too but they weren’t of the “my team will win” form, and he did well on those as well.
In particular, one should be skeptical of having lots of people who consistently do worse than average.
I think, though, that it would, in fact, be worthwhile to do the analysis combining 2008 and 2010. I think Paul Krugman had already started panicking by then.
More interesting might be to see how much data it takes for prediction markets to beat most/all pundits.
I would expect Krugman to suffer penalties over the last few years; I don’t read him very much, but he seems to have gotten much more partisan and inaccurate as time passes.
In particular, one should be skeptical of having lots of people who consistently do worse than average.
Outliers? That’s actually what I would expect. People with superior prediction skills can end up significantly positive. The same people could use their information backwards to end up significantly negative, but it is damn hard to lose to a vaguely efficient market reliably and significantly if you are stupid (or uninformed).
Sorry, I should have said “worse than random”. To do worse than random, one would have to take a source of good predictions and twist it into a source of bad ones. The only plausible explanation I could think of for this is that you know a group of people who are good at predicting and habitually disagree with them. It seems like there should be far fewer such people than there are legitimate good predictors.
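As a quick sanity check on the “twist a good source into bad predictions” point, here is a toy simulation (the 0.7 source accuracy is an invented number): habitually disagreeing with an above-chance source is what it takes to land consistently below 0.5.

```python
# Toy check (accuracy number invented): flipping a source with accuracy p
# scores 1 - p, so landing consistently below random's 0.5 requires access
# to a genuinely above-chance signal to disagree with.
import random

random.seed(1)
p, n = 0.7, 100_000
source_hits = contrarian_hits = 0
for _ in range(n):
    truth = random.random() < 0.5
    source = truth if random.random() < p else (not truth)
    source_hits += (source == truth)
    contrarian_hits += ((not source) == truth)  # habitually disagree

print(source_hits / n)      # ~0.7
print(contrarian_hits / n)  # ~0.3, consistently worse than random
```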
It’s easy to lose to an efficient market if you’re not playing the efficient market’s games. If you take your stated probability and the market’s implied probability and accept a bet priced somewhere in between, you are likely to lose money over time.
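A toy numeric check of that claim, with all probabilities invented: assume the market’s implied probability is the true one, and watch what happens to someone who buys contracts at a price between the market’s number and their own.

```python
# Toy check (all probabilities invented): if the market's implied probability
# is the true one, buying a contract at any price between it and your own
# belief loses money in expectation.
import random

random.seed(2)
true_p = 0.60   # market's implied probability, assumed correct
my_p = 0.90     # my overconfident stated probability
price = 0.75    # we settle on a bet priced between the two

n = 100_000
profit = sum((1.0 if random.random() < true_p else 0.0) - price
             for _ in range(n))
print(profit / n)  # ~ true_p - price = -0.15 per contract
```

The per-contract loss is just true_p minus the price, so the further the agreed price sits from the market’s (correct) number, the faster the money drains.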
We are in complete agreement, and I should have been explicit and said I was refining a detail on an approximately valid point!
It seems like there should be far fewer such people than there are legitimate good predictors.
And it seems like those who do exist should have less money to be betting on markets! If not, then it would seem like the other group is making some darn poor strategic predictions regarding the rest of their life choices!
It’s easy to lose to an efficient market if you’re not playing the efficient market’s games.
Yes, like it is easy for a thief to get all my jewelry if I break into his house and put it on the table. Which I suppose is the sort of thing they do on Burn Notice to frame the bad guys for crimes. Which makes me wonder if it would be possible to frame someone for, say, insider trading or industrial espionage by losing money to someone such that their windfall is suspicious.
My point is that you’re losing in a context of prediction accuracy, not losing money.
that you know a group of people who are good at predicting and habitually disagree with them.
It seems to me that this is exactly the sort of thing that can really happen in politics. Suppose you have two political parties, the Greens and the Blues, and that for historical reasons it happens that the Greens have adopted some ways of thinking that actually work well, and the Blues make it their practice to disagree with everything distinctive that the Greens say.
(And it could easily happen that there are more Blues than Greens, in which case you’d get lots of systematically bad predictors.)