Bias is not the only source of errors. It is notoriously hard to come up with probability estimates for rare events, ones that are way out in the tails of the distribution.
Yes, I don’t think calibration training will let me tell the difference between something with a .00005% chance and something with a .000005% chance, but it should keep me from estimating something at 5% when logic says the probability is orders of magnitude below that.
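As a rough way to see the size of that gap, here is a small illustrative Python sketch (not from the original discussion) that compares the estimates on a log scale:

```python
import math

def orders_of_magnitude_apart(p, q):
    """How many powers of ten separate two probability estimates."""
    return abs(math.log10(p) - math.log10(q))

# Convert the percentages from the comment into probabilities.
p_overestimate = 0.05        # 5%
p_plausible    = 5e-7        # .00005%
p_tiny         = 5e-8        # .000005%

# The two tail estimates differ by only one order of magnitude...
print(orders_of_magnitude_apart(p_plausible, p_tiny))           # ~1.0
# ...while the 5% guess is off from either by about five or six.
print(orders_of_magnitude_apart(p_overestimate, p_plausible))   # ~5.0
```

The point is that calibration training doesn’t need to resolve the one-order-of-magnitude question to be useful; catching the five-order-of-magnitude mistake is where the value is.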