Prase: “This reasoning gives the probability 1/1000 for any conceivable minority hypothesis, which is inconsistent.”
Sure; for example, if you applied this kind of “rough guesstimate” reasoning to, say, 1001 mutually exclusive minority views, you would end up with a total probability greater than 1. But I would not apply this reasoning in all cases: there may be some specific cases where I would modify the starting guess, for example if it led to inconsistency.
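(To make the arithmetic explicit: 1001 × 1/1000 = 1.001 > 1, whereas the probabilities of mutually exclusive hypotheses must sum to at most 1.)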
I think this illustrates that it is difficult to lay down hard-and-fast rules for useful heuristics. I think you’d agree that assigning a probability of 1⁄200 or 1⁄5000 to the hypothesis that the scientific community is mistaken about the safety of some particular process is a reasonable heuristic to carry around, even if overzealous application of such heuristics leads to inconsistencies. The answer, of course, is not to be overzealous.
And, of course, a better answer than the one I originally gave would be to look into the past history of major disasters that were predicted by some minority view within the scientific community, and get some actual numbers. How many times has a small group of outspoken doomsayers been proven right? How many times not? If I had the time, I’d do it. Perhaps this would be a useful exercise for the FHI to undertake.
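If someone did compile those numbers, the final calculation would be trivial. Here is a minimal sketch of what I have in mind, in Python; the counts are hypothetical placeholders (the real figures would have to come from the historical survey), and I use Laplace’s rule of succession so a small sample doesn’t force the estimate to 0 or 1:

```python
def minority_view_base_rate(times_right: int, times_wrong: int) -> float:
    """Estimate P(the minority doomsayers are correct) from historical counts,
    smoothed with Laplace's rule of succession: (right + 1) / (total + 2)."""
    return (times_right + 1) / (times_right + times_wrong + 2)

# Hypothetical counts, purely for illustration -- NOT real survey data:
# suppose 3 vindicated minority predictions out of 200 episodes examined.
print(minority_view_base_rate(3, 197))  # ~0.0198, i.e. roughly 1/50
```

That empirical base rate, rather than a round guess like 1/1000, would then be the sensible starting prior.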