I am considerably more skeptical of op-eds and other punditry after tracking the rare clear predictions they made.
In the case of a few well-studied pundits, you should examine the evidence gathered by other prediction trackers. Some pundits are well outside the dumb-luck range on a ten-point scale:
The best? Paul Krugman, with a PVS of 8.2 (You can see a screenshot of his score sheet to the right. Note: Score sheets for each of the pundits are in the full text document).
The worst? Cal Thomas, with a PVS of −8.7 (You read that right. Negative eight point seven...).
Kinda surprising to me that you can beat dumb luck in inaccuracy. I hope they do a followup.
Since the study focused on the period around the 2008 elections, which the Democrats won on nearly all levels, and since most pundits tend to be biased towards believing that what they wish would happen will happen, it’s not surprising that liberals’ predictions did better and some conservatives scored worse than random. I suspect we’d see the trend go the other way for, say, predictions about the 2010 midterms. The fundamental problem is that the predictions weren’t independent.
Since the correlation between liberalism and correctness was weak, most pundits probably wouldn’t gain or lose much score in a more politically average year. In Krugman’s case, for example, most of the scored predictions were economic, not political, forecasts. In Cal Thomas’s case, however, your explanation might basically work.
True, though of course in Krugman’s case I suspect most of his predictions amounted to predicting that the financial crisis was going to be really bad, and thus were also correlated.
Another LW discussion of Krugman’s alleged accuracy pointed both here and to a spreadsheet with the actual predictions. About half of his predictions did indeed amount to saying that the financial crisis was going to be really bad. There were some political ones too, but they weren’t of the “my team will win” form, and he did well on those as well.
In particular, one should be skeptical of having lots of people who consistently do worse than average.
I think, though, that it would, in fact, be worthwhile to do the analysis combining 2008 and 2010. I think Paul Krugman had already started panicking by then.
More interesting might be to see how much data it takes for prediction markets to beat most/all pundits.
I would expect Krugman to suffer penalties over the last few years; I don’t read him very much, but he seems to have gotten much more partisan and inaccurate as time passes.
In particular, one should be skeptical of having lots of people who consistently do worse than average.
Outliers? That’s actually what I would expect. People with superior prediction skills can become significantly positive. The same people could use their information backwards to become significantly negative, but it is damn hard to reliably lose significantly to a vaguely efficient market if you are merely stupid (or uninformed).
Sorry, I should have said “worse than random”. To do worse than random, one would have to take a source of good predictions and twist it into a source of bad ones. The only plausible explanation I could think of for this is that you know a group of people who are good at predicting and habitually disagree with them. It seems like there should be far fewer such people than there are legitimate good predictors.
It’s easy to lose to an efficient market if you’re not playing the efficient market’s games. If you take your stated probability and the market’s implied probability and make a bet at odds somewhere in between, you are likely to lose money over time, since the market’s estimate will usually be the better-calibrated one.
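A rough Monte Carlo sketch of that point, assuming (purely for illustration, not from the original comment) that the market price is well calibrated while the bettor’s estimate is noisier; the noise levels and bet counts are made up:

```python
import random

random.seed(0)

def clip(p):
    """Keep a probability strictly inside (0, 1)."""
    return min(0.99, max(0.01, p))

n_bets, profit = 100_000, 0.0
for _ in range(n_bets):
    p_true = random.uniform(0.05, 0.95)            # true chance of the event
    p_market = clip(random.gauss(p_true, 0.02))    # market price: close to the truth
    p_mine = clip(random.gauss(p_true, 0.15))      # my estimate: much noisier

    price = (p_mine + p_market) / 2                # bet at odds "somewhere in between"
    happened = random.random() < p_true

    if p_mine > p_market:
        # I think the contract (pays 1 if the event happens) is underpriced: buy it.
        profit += (1 - price) if happened else -price
    else:
        # I think it is overpriced: sell it.
        profit += price if not happened else price - 1

print(f"average profit per contract: {profit / n_bets:+.4f}")
# Comes out negative: conditional on disagreeing with a better-calibrated
# market, the in-between price is, on average, on the wrong side of the truth.
```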
We are in complete agreement, and I should have been explicit and said I was refining a detail on an approximately valid point!
It seems like there should be far fewer such people than there are legitimate good predictors.
And it seems like those who do exist should have less money to be betting on markets! If not, then it would seem like the other group is making some darn poor strategic predictions regarding the rest of their life choices!
It’s easy to lose to an efficient market if you’re not playing the efficient market’s games.
Yes, like it is easy for a thief to get all my jewelry if I break into his house and put it on the table. Which I suppose is the sort of thing they do on Burn Notice to frame the bad guys for crimes. Which makes me wonder if it would be possible to frame someone for, say, insider trading or industrial espionage by losing money to someone such that their windfall is suspicious.
that you know a group of people who are good at predicting and habitually disagree with them.
It seems to me that this is exactly the sort of thing that can really happen in politics. Suppose you have two political parties, the Greens and the Blues, and that for historical reasons it happens that the Greens have adopted some ways of thinking that actually work well, and the Blues make it their practice to disagree with everything distinctive that the Greens say.
(And it could easily happen that there are more Blues than Greens, in which case you’d get lots of systematically bad predictors.)
Yes, I remember that study—it wasn’t as long term as I would like, and I always wonder about the quality of a study conducted by students, but it was interesting anyway.
The last time I cited this study, I remember that their sample size was well under thirty for each of their pundits. At that level, what’s the point of statistics?
Kinda surprising to me that you can beat dumb luck in inaccuracy.
It shouldn’t be. Assume that your pundits in general do no better than chance. In a large sample, some of them are going to have to do really badly. Even if your pool on average is better than chance, one should still expect a few to do much worse.
That said, even given that, −8.7 by their metric looks really bad.
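A quick simulation of that point (the pundit and prediction counts here are hypothetical, not taken from the study): give every “pundit” pure coin-flip predictions and look at the worst score that shows up.

```python
import random

random.seed(1)

N_PUNDITS, N_PREDICTIONS = 200, 25   # made-up numbers, chosen for illustration

# Every pundit guesses at random, so each is "right" with probability 0.5.
hit_rates = sorted(
    sum(random.random() < 0.5 for _ in range(N_PREDICTIONS)) / N_PREDICTIONS
    for _ in range(N_PUNDITS)
)

print(f"median pundit: {hit_rates[len(hit_rates) // 2]:.0%} correct")
print(f"worst pundit:  {hit_rates[0]:.0%} correct")
print(f"best pundit:   {hit_rates[-1]:.0%} correct")
# The worst of 200 coin-flippers typically lands around 25-30% correct:
# far "worse than chance" to the eye, purely through sampling noise.
```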
According to that study, being a lawyer by training was one of the things that caused predictors to do badly. Note that Cal Thomas doesn’t fall into that category.
Yes, like it is easy for a thief to get all my jewelry if I break into his house and put it on the table.
My point is that you’re losing in terms of prediction accuracy, not losing money.
The last time I cited this study, I remember that their sample size was well under thirty for each of their pundits. At that level, what’s the point of statistics?
If the effect size is large enough, 30 observations is plenty, and enough to do stats on. Go through a power calculation sometime with, say, d=0.7.
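For what it’s worth, here is one way to run that power calculation, assuming a two-sided one-sample t-test at alpha = 0.05 (my choice of test and alpha, not the commenter’s):

```python
from math import sqrt
from scipy.stats import t, nct

d, n, alpha = 0.7, 30, 0.05        # effect size, sample size, significance level
df = n - 1
ncp = d * sqrt(n)                  # noncentrality parameter under the alternative
crit = t.ppf(1 - alpha / 2, df)    # two-sided critical value

# Power = chance that |T| exceeds the critical value when the true effect is d.
power = (1 - nct.cdf(crit, df, ncp)) + nct.cdf(-crit, df, ncp)
print(f"power at d = {d}, n = {n}: {power:.2f}")   # roughly 0.96
```

With an effect that large, 30 observations would detect it almost every time; the worry with small pundit samples is resolving small or moderate effects, not doing statistics at all.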