Here’s my steelman of this abstract argument:
My prior probability for not-X is very high, and the evidence for X is so weak and scant that it might well just be coincidence, cherry-picking, or a data artifact. In the interest of brevity, I’m going to round this off to “no evidence.” By phrasing it this way, I transmit my confidence to others in order to avoid a stupid debate over a non-issue.
In concrete terms, this is exactly the form of argument I think is appropriate against claims of homeopathy and psychic powers.
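As a rough sketch of the arithmetic this move relies on (the prior and likelihood ratio below are invented purely for illustration, not taken from any study):

```python
# Toy Bayesian update with invented numbers: a very low prior for X and a
# modest likelihood ratio from weak, possibly artifactual evidence.
prior_odds = 1 / 10_000        # prior odds of X against not-X
likelihood_ratio = 3           # evidence is 3x as likely under X as under not-X

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"P(X | evidence) ≈ {posterior_prob:.4%}")  # roughly 0.03%
```

Even granting the weak evidence a threefold likelihood ratio in favour of X, the posterior stays well under a tenth of a percent, which is what “rounding off to no evidence” amounts to.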
In my original version I justified phrasing it as “no evidence” by saying “The research of Philip Tetlock shows that forecasters achieve better Brier scores when they exaggerate their confidence.” I no longer endorse this.
They showed that it’s good to extremise the combined predictions of a team when the individual forecasts agree with each other, but I don’t think that individual forecasters were systematically underconfident.
Ah, interesting! Thanks for the catch.
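To make the terms in that exchange concrete, here is a minimal sketch of a Brier score together with one common extremising transform, p^a / (p^a + (1 − p)^a). The forecasts, the outcome, and the exponent are invented for illustration and are not taken from Tetlock’s research.

```python
# Minimal sketch: Brier scores for an aggregated forecast before and after
# extremising. All numbers below are made up for illustration.

def brier(p: float, outcome: int) -> float:
    """Brier score for one binary forecast: (p - outcome)^2, lower is better."""
    return (p - outcome) ** 2

def extremise(p: float, a: float = 2.5) -> float:
    """Push an aggregate probability away from 0.5 (one common functional form)."""
    return p ** a / (p ** a + (1 - p) ** a)

team = [0.75, 0.8, 0.7, 0.85]     # several forecasters who roughly agree
avg = sum(team) / len(team)       # simple average: 0.775
ext = extremise(avg)              # pushed toward 1, since the team agrees

outcome = 1                       # suppose the event actually happened
print(f"averaged   p={avg:.3f}  Brier={brier(avg, outcome):.3f}")
print(f"extremised p={ext:.3f}  Brier={brier(ext, outcome):.3f}")
```

With these numbers the extremised aggregate scores better because the agreeing forecasters happen to be right; it would score correspondingly worse had the event not occurred. Either way, the benefit comes from how forecasts are combined, not from individual forecasters being systematically underconfident.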