Yes, I don’t think calibration training will let me tell the difference between something with a 0.00005% chance and something with a 0.000005% chance, but it should keep me from estimating something at 5% when logic says the probability is orders of magnitude lower than that.
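To make the "orders of magnitude" point concrete, here's a rough sketch (my own illustration, with an assumed true probability of about 1 in 2 million): a 5% estimate gets contradicted by experience after only a few hundred resolved predictions, while telling 0.00005% apart from 0.000005% would take far more trials than anyone ever accumulates.

```python
# Illustration only: how many resolved predictions it takes for feedback to
# expose a miscalibrated estimate at different probability scales.

def expected_hits(prob: float, n_predictions: int) -> float:
    """Expected number of events that actually occur across n predictions."""
    return prob * n_predictions

# Assume the true chance of the event is about 1 in 2 million (0.00005%).
true_p = 5e-7

for n in (1_000, 100_000, 10_000_000):
    actual = expected_hits(true_p, n)   # what really happens
    naive = expected_hits(0.05, n)      # what a 5% estimate predicts
    fine = expected_hits(5e-8, n)       # what a 0.000005% estimate predicts
    print(f"n={n:>10,}: ~{actual:.4f} real hits; "
          f"5% estimate predicts {naive:,.0f}; "
          f"0.000005% estimate predicts {fine:.5f}")
```

Even at ten million predictions, the 0.00005% and 0.000005% estimates both predict essentially zero hits, so no feedback loop can separate them, whereas the 5% estimate predicts hundreds of thousands of hits that never show up.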