Yes, and the paper had several other big problems. For example, it didn’t treat mild belief and certainty differently; someone who suspected Hillary might be the Democratic nominee was treated as harshly as someone who was 100% sure the Danish were going to invade.
Worse, people get marked down for making conditional predictions whose antecedent was not satisfied! And then they have the audacity to claim that they’ve discovered that making conditional predictions predicts low accuracy.
They also penalise people for hedging, yet surely a hedged prediction is better than no prediction at all?
it didn’t treat mild belief and certainty differently;
It did. Per the paper, the confidences of the predictions were rated on a scale from 1 to 5, where 1 is “No chance of occurring” and 5 is “Definitely will occur”. They didn’t use this in their top-level rankings because they felt it was “accurate enough” without that, but they did use it in their regressions.
Worse, people get marked down for making conditional predictions whose antecedent was not satisfied!
They did not. Per the paper, those were simply thrown out (as people do on PredictionBook).
They also penalise people for hedging, yet surely a hedged prediction is better than no prediction at all?
I agree here, mostly. Looking through the predictions they’ve marked as hedging, some seem like sophistry but some seem like reasonable expressions of uncertainty; if they couldn’t figure out how to properly score them they should have just left them out.
If you think you can improve on their methodology, the full dataset is here: .xls.
Incidentally, the best way to make conditional predictions is to convert them to explicit disjunctions. For example, in November I wanted to predict that “If Mitt Romney loses the primary election, Barack Obama will win the general election.” This is actually logically equivalent to “Either Mitt Romney or Barack Obama will win the 2012 Presidential Election,” barring some very unlikely events, so I posted that instead, and so I won’t have to withdraw the prediction when Romney wins the primary.
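To spell out the equivalence being claimed, here is a minimal sketch; the outcome labels are purely illustrative, and the assumption that Romney can only win the general election by first winning the primary is mine. Read as a material conditional, “if Romney loses the primary, Obama wins the general” is “Romney wins the primary, or Obama wins the general”, which only comes apart from “either Romney or Obama wins the election” when the Republican nominee somehow loses to a third candidate.

```python
from itertools import product

# Enumerate coarse outcomes: does Romney win the Republican primary, and
# who wins the 2012 general election ("other" = any third candidate).
# Illustrative assumption: Romney can only win the general if he first
# wins the primary.
for romney_wins_primary, general_winner in product([True, False],
                                                   ["Romney", "Obama", "other"]):
    if general_winner == "Romney" and not romney_wins_primary:
        continue  # impossible under the assumption above

    # Material-conditional reading of "if Romney loses the primary,
    # Obama wins the general": (Romney wins primary) OR (Obama wins).
    conditional_as_material = romney_wins_primary or general_winner == "Obama"

    # The disjunction actually posted: "either Romney or Obama wins".
    posted_disjunction = general_winner in ("Romney", "Obama")

    flag = "" if conditional_as_material == posted_disjunction else "  <-- differ"
    print(f"primary={romney_wins_primary!s:5} winner={general_winner:6} "
          f"conditional={conditional_as_material!s:5} "
          f"disjunction={posted_disjunction!s:5}{flag}")
```

The only row flagged as differing is the one where Romney is nominated and then loses to a third candidate, which is the “very unlikely event” being waved away.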
While that may be best with current PB, I think conditional predictions are useful.
If you are only interested in truth values and not the strength of the prediction, then it is logically equivalent, but the number of points you get is not the same. The purpose of a conditional probability is to take a conditional risk. If Romney is nominated, you get a gratuitous point for this prediction. Of course, simply counting predictions is easy to game, which is why we like to indicate the strength of the prediction, as you do with this one on PB. But turning a conditional prediction into an absolute prediction changes its probability and thus its effect on your calibration score. To a certain extent, it amounts to double counting the prediction about the hypothesis.
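To make the scoring point concrete, here is a minimal sketch using a Brier (squared-error) score, which is my own choice rather than anything PredictionBook or the paper specifies; the probabilities are invented for illustration. The conditional simply drops out of the record when its antecedent fails, while the disjunction is always scored, at a much higher probability, so the two feed quite different numbers into a calibration tally.

```python
def brier(p, outcome):
    """Squared-error (Brier) score for one yes/no prediction:
    p is the stated probability that the event happens, outcome is True/False.
    Lower is better."""
    return (p - (1.0 if outcome else 0.0)) ** 2

# Made-up probabilities, for illustration only.
p_obama_given_romney_loses_primary = 0.8   # the conditional, as originally intended
p_romney_or_obama_wins_general     = 0.97  # the disjunction actually posted

# Scenario: Romney wins the primary, then wins the general election.
romney_loses_primary = False
obama_wins_general = False
romney_or_obama_wins = True

# The conditional is thrown out: its antecedent never happened, so it
# contributes nothing to the predictor's record.
conditional_score = (brier(p_obama_given_romney_loses_primary, obama_wins_general)
                     if romney_loses_primary else None)

# The disjunction is always scored, and in this scenario it earns a
# near-perfect score just because the nomination resolved the easy way.
disjunction_score = brier(p_romney_or_obama_wins_general, romney_or_obama_wins)

print("conditional scored:", conditional_score)   # None -> not counted
print("disjunction scored:", round(disjunction_score, 4))
```

Even in the scenario where Romney does lose the primary, the two are scored at different probabilities (0.8 versus 0.97 here), which is the double-counting worry: the disjunction bundles in a prediction about the nomination itself.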
This is less specific than the first prediction. The second version loses the part where you predict Obama will beat Romney.
The first version doesn’t have that part either; he’s predicting that if Romney gets eliminated in the primaries, i.e. if Gingrich, Santorum, or Paul is the Republican nominee, then Obama will win.
You’re right, I misread.
it didn’t treat mild belief and certainty differently;
… they did use it in their regressions.
Sure, so we learn about how confidence is correlated with binary accuracy. But they don’t take into account that being very confident and wrong should be penalised more than being slightly confident and wrong.
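A proper scoring rule does exactly that. As a minimal sketch (the logarithmic score is my choice here, not something the paper uses), the penalty for a miss grows sharply with the stated confidence:

```python
import math

def log_score(p, outcome):
    """Logarithmic score: log of the probability assigned to what actually
    happened. 0 is perfect; more negative is worse."""
    return math.log(p if outcome else 1.0 - p)

# A prediction that turns out false, made at increasing confidence levels.
for p in (0.55, 0.75, 0.95, 0.99):
    print(f"said {p:.0%}, event did not happen -> score {log_score(p, False):.2f}")
```

A binary hit/miss tally, by contrast, charges all four of those misses the same.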
Worse, people get marked down for making conditional predictions whose antecedent was not satisfied! And then they have the audacity to claim that they’ve discovered that making conditional predictions predicts low accuracy.
Why do you think this? Doesn’t seem true at all to me.
Looking at the spreadsheet there are many judgements left blank with the phrase “conditional not met.” They are not counted in the total number of predictions.
I misread; you are right.
That made me giggle.