I like Roko’s suggestion that we should look at how many doomsayers actually predicted a danger (and how early). We should also look at how many dangers occurred with no prediction at all (the Cameroon lake eruptions come to mind).
Overall, the human error rate is pretty high: http://panko.shidler.hawaii.edu/HumanErr/
Getting the error rate under 0.5% per statement/action seems very unlikely, unless one deliberately builds a system that forces several iterations of checking and correction (Panko's data suggests that each round of error checking typically finds about 80% of the remaining errors). For scientific papers/arguments, one bad per thousand is probably a conservative estimate. (My friend Mikael claimed that the number of erroneous maths papers is far lower than this, because of the peculiarities of the field, but I wonder how many orders of magnitude of reliability those peculiarities can actually buy.)
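As a back-of-the-envelope sketch of that arithmetic (in Python; the 0.5% base rate and 80% catch rate are the figures cited above, while the assumption that successive checking passes are independent is mine):

    # Residual error rate after repeated checking, assuming each pass is
    # independent and catches ~80% of the errors that remain (Panko's figure).
    base_error_rate = 0.005  # ~0.5% errors per statement/action
    catch_rate = 0.80        # fraction of remaining errors found per pass

    for passes in range(4):
        residual = base_error_rate * (1 - catch_rate) ** passes
        print(f"after {passes} checking pass(es): ~{residual:.4%} per statement")
    # prints 0.5000%, 0.1000%, 0.0200%, 0.0040% -- on these assumptions,
    # each iteration of checking buys roughly a factor of five.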
At least to me this suggests that, in the absence of any other evidence, assigning a prior probability much below 1/1000 to any event we regard as extremely unlikely is overconfident: the argument establishing that improbability is itself more likely than 1/1000 to contain an error. Of course, as soon as we have a bit of evidence (cosmic rays, knowledge of physics) we can start using smaller priors. But uninformative priors are always going to be odd and silly.
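To make that floor concrete, here is a hedged sketch with illustrative numbers of my own choosing (the 1e-9 claim and the 0.5 ignorance value for a flawed argument are both pure assumptions, not figures from anywhere):

    # All-things-considered probability when the argument itself may be flawed.
    p_flawed = 1e-3     # chance the argument contains an error (one per thousand, as above)
    p_if_sound = 1e-9   # what the argument claims the event's probability is (illustrative)
    p_if_flawed = 0.5   # ignorance value if the argument is wrong (pure assumption)

    p_effective = (1 - p_flawed) * p_if_sound + p_flawed * p_if_flawed
    print(f"effective probability: {p_effective:.1e}")  # ~5.0e-04, nowhere near 1e-9

However confident the internal argument is, the error term dominates and the effective probability stays near 1/1000.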