We all do, er, all but .001% or whatever of us.
But calibration training should theoretically fix these exact issues—I’m going to try to find a better calibration question set that can help me with this.
I am not sure about that—why do you think so?
Because it’s deliberate practice in debiasing—it’s specifically created to train out those biases.
Edit: To be clear, I’m not sure about it either, but theoretically, that’s what’s supposed to happen.
Bias is not the only source of errors. It is notoriously hard to come up with probability estimates for rare events, ones that are way out in the tails of the distribution.
Yes, I don’t think calibration training will let me tell the difference between something with a .00005% chance and something with a .000005% chance, but it should keep me from estimating something at 5% when logic says the probability is orders of magnitude below that.
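For anyone unfamiliar with how the feedback loop in calibration training works, here is a minimal sketch (not from the discussion above, and the numbers are made up): you state a probability for each question, then compare your stated confidence against your actual hit rate in each confidence bucket, which is the signal that is supposed to train the biases out.

```python
from collections import defaultdict

def calibration_report(predictions):
    """predictions: list of (stated_probability, was_correct) pairs."""
    buckets = defaultdict(list)
    for prob, correct in predictions:
        # Group answers by stated confidence, rounded to the nearest 10%.
        buckets[round(prob, 1)].append(correct)

    for bucket in sorted(buckets):
        outcomes = buckets[bucket]
        hit_rate = sum(outcomes) / len(outcomes)
        print(f"stated ~{bucket:.0%}: actually right {hit_rate:.0%} "
              f"of the time ({len(outcomes)} answers)")

# Hypothetical results from a trivia-style calibration question set.
example = [(0.9, True), (0.9, True), (0.9, False),
           (0.7, True), (0.7, False), (0.7, False),
           (0.5, True), (0.5, False)]
calibration_report(example)
```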