I just ran into this post while searching for calibration posts, and I think this is great. Good job working on this new skill, and I appreciated hearing how it went for you. :)
A thing I discovered when I first got serious about logging lots of PredictionBook predictions was sort of the opposite of yours: almost all of my predictions, at all the different probabilities I assigned, turned out to be right either 30% or 70% of the time. (If an event was interesting enough to make a prediction about, and I thought the odds of it were 50% or more, it basically happened 70% of the time no matter what probability I put.)
I'm not sure if this is still true now that I've gotten a few years more experience (and somewhat integrated the 30%/70% heuristic).