After starting up PredictionBook, I’ve noticed I’m underconfident at 60% (I get 81% of my 60% predictions right) and underconfident at 70% (only get 44% right).
This is neat… but I’m not quite sure what I’m actually supposed to do. When I’m forming a prediction, often the exact number feels kinda arbitrary. I’m worried that if I try to take into account my under/overconfidence, I’ll end up sort of gaming the system rather than learning anything. (i.e. look for excuses to shove my confidence into a bucket that is currently over/underconfident, rather than actually learning “when I feel X subjectively, that corresponds to X actual confidence.”
Curious if folk have suggestions.
Sounds like mostly low sample size?
Both of them have 15 predictions at this point. Could still be low sample size but seemed enough to be able to start adjusting.
(and, even if it turns out I am actually better calibrated than this and it goes away at larger samples, I’m still interested in the general answer to the question)
Presumably you mean “overconfident”?
Also, you dropped a parenthesis somewhere.