Maybe. My initial response was to consider editing my wording to redact the implication you’re objecting to.
But it seems quite plausible to me that even someone who works hard to be calibrated has many coexisting heuristics, each with its own calibration graph, so that beliefs at the 99.9% level are likely different in kind from beliefs at the 95% level: the mental process that spits out such a high confidence is a different one. Why, then, expect calibration around 99.9% based on calibration around 95%?
And if someone cites million-to-one confidence, then yeah, something is probably up with that! Maybe they’ve actually got millions of samples to generalize from, but even so, are they generalizing correctly? It seems like there is some reason to expect the graph to dip a bit at the end.
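To make the worry concrete, here is a minimal sketch (my own toy model, not anything from the discussion above): suppose the heuristic behind a forecaster's "95%" claims really is right 95% of the time, but the separate process behind their "99.9%" claims is only right 99% of the time. The names and numbers are hypothetical, chosen just to show how the calibration curve can dip at the extreme end.

```python
import random

def empirical_accuracy(true_prob, n=100_000, seed=0):
    """Fraction of claims that turn out true, given that the
    underlying heuristic is actually right with probability true_prob."""
    rng = random.Random(seed)
    return sum(rng.random() < true_prob for _ in range(n)) / n

# Heuristic behind "95%" claims: genuinely calibrated.
acc_95 = empirical_accuracy(0.95)
# Separate process behind "99.9%" claims: only right 99% of the time.
acc_999 = empirical_accuracy(0.99)

print(f"stated 95.0%  -> empirical {acc_95:.3f}")   # close to 0.950
print(f"stated 99.9%  -> empirical {acc_999:.3f}")  # close to 0.990, not 0.999
```

Checking calibration at the 95% level tells you nothing about the second process, which is exactly why the graph can look fine up to 95% and still sag near the tail.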