Hmm… I have an idea regarding this, and also regarding Roko’s suggestion to disregard low probabilities.
If you are generally unable to estimate probabilities of events lower than, say, 1/1000, then you must calibrate your estimates for such events way down, well below 1/1000.
There are very many things you'll only be able to estimate as "probability below 1/1000", some of them mutually exclusive. Normalization requires that the sum of their probabilities stay below unity, so each estimate must actually be tuned down much further: if there are N mutually exclusive events in this class, their average probability can be at most 1/N, which for N far greater than 1000 is far below 1/1000. As a result, you can't insist that some part of the distribution produced by an uncertain estimate is still high enough to matter, and you should generally treat things falling in this class as way less probable than the 1/1000 cap suggests.
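To make the normalization squeeze concrete, here is a minimal Python sketch (my own illustration, not anything from the original discussion; the event counts are made-up parameters): with N mutually exclusive events each assessed only as "below 1/1000", their average probability is bounded by 1/N, which shrinks rapidly as the class grows.

```python
# Minimal sketch of the normalization argument: N mutually exclusive
# events, each naively capped at "probability below 1/1000", must have
# probabilities summing to at most 1, so the effective per-event bound
# is min(1/1000, 1/N).

def renormalized_cap(n_events: int, naive_cap: float = 1e-3) -> float:
    """Upper bound on the average probability of each of n_events
    mutually exclusive events, given the naive per-event cap and the
    constraint that all probabilities sum to at most 1."""
    return min(naive_cap, 1.0 / n_events)

# Hypothetical class sizes, chosen only to show how the bound shrinks.
for n in (100, 1_000, 100_000, 10_000_000):
    print(f"{n:>10} events: average probability at most {renormalized_cap(n):.2e}")
```

Once the class holds more than 1000 mutually exclusive events, the binding constraint is 1/N rather than the 1/1000 estimation floor, which is the sense in which the estimates must be tuned down.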