Why did you pick 5%? That number seems very high to me.
I did the equivalent bet test, and came up with about 5%. I suspect that, because of the kinds of questions I've done calibration training on, I have a very hard time working with extremely low probabilities.
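The equivalent bet test amounts to searching for the point of indifference between a bet on the event and a reference lottery with a known win probability. A minimal sketch of that idea, where `feels_better` is just a hypothetical stand-in for the human judgment call:

```python
def equivalent_bet(feels_better, lo=0.0, hi=1.0, steps=20):
    """Binary-search for the lottery win probability p at which the
    judge is indifferent between 'bet on the event' and 'lottery with
    win probability p'. Returns that p as the elicited estimate."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if feels_better(mid):   # judge still prefers the event bet,
            lo = mid            # so their probability exceeds mid
        else:
            hi = mid
    return (lo + hi) / 2

# Illustration only: a judge whose true belief is 5% prefers the
# event bet whenever the lottery offers less than a 5% win chance.
estimate = equivalent_bet(lambda p: p < 0.05)
```

In practice the "judge" is you comparing gut feelings, not a function, which is exactly where very small probabilities get hard: the difference between two tiny lottery tickets stops feeling like anything.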
Where did you do your calibration training? On PredictionBook, I think most people would put 0% in the box for Zoltan getting elected in the next election.
I've used PredictionBook rarely; I mostly use the calibration game and the updating game.
What do you mean by "updating game"?
http://rationality.org/apps/
The page lists the calibration game with a link but lists no link for the updating game. Is the updating game something that CFAR uses internally?
http://www.patheos.com/blogs/unequallyyoked/2012/07/play-along-with-rationality-camp-at-home.html has a link
Edit: https://groups.google.com/forum/#!topic/lesswrongslc/DuWDe_km88w has more links. They seem to be malformed by Google, but manually fixing them works.
Mac: https://dl.dropbox.com/u/30954211/RationalityGames/UpdatingGame%28Mac%29.app.zip Android: https://dl.dropbox.com/u/30954211/RationalityGames/UpdatingGame%28And%29.apk
I actually can't recall how I got the updating game… I believe it's on the Android store somewhere, but really hard to find.
We all do; er, all but .001% or whatever of us.
But calibration training should, in theory, fix these exact issues. I'm going to try to find a better calibration question set that can help me with this.
I am not sure about that—why do you think so?
Because it's deliberate practice in debiasing; it's specifically designed to train out those biases.
Edit: To be clear, I’m not sure about it either, but theoretically, that’s what’s supposed to happen.
Bias is not the only source of errors. It is notoriously hard to come up with probability estimates for rare events, ones that are way out in the tails of the distribution.
Yes, I don't think calibration training will enable me to tell the difference between something with a .00005% chance and something with a .000005% chance, but it should keep me from estimating something at 5% when logic says the probability is orders of magnitude below that.