I don’t believe anyone can assign meaningful very small or very large probabilities in most situations. It is one of my long-running disagreements with people here and on OB.
There are indeed many known human biases of this kind, plus a general inability to distinguish small differences in probability.
But we can’t treat every low-probability scenario as having, say, p = 0.1 or some other constant probability! What do you suggest, then?
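To make that concrete, here’s a minimal sketch of why a constant clamp fails; the events, probabilities, and dollar losses are my own illustrative assumptions, not anything from this thread:

```python
# Sketch: expected losses under assumed "true" probabilities vs. a rule
# that treats every low-probability event as p = 0.1.
# All figures below are illustrative assumptions.

scenarios = [
    # (event, assumed true probability, assumed loss in $)
    ("house fire this year", 1e-3, 300_000),
    ("asteroid destroying my city", 1e-9, 300_000),
]

CLAMP = 0.1  # the hypothetical "every low probability is 0.1" rule

for name, p, loss in scenarios:
    true_ev = p * loss        # expected loss using the assumed true probability
    clamped_ev = CLAMP * loss # expected loss under the clamping rule
    print(f"{name}: true EV ${true_ev:,.6g}, clamped EV ${clamped_ev:,.6g}")
```

The assumed true expected losses differ by six orders of magnitude ($300 versus $0.0003), but the clamp prices both events identically at $30,000, making a one-in-a-billion catastrophe look exactly as pressing as a one-in-a-thousand one.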
I don’t know of a unified way of handling extremely small risks, but two things can help. First, as Marc Stiegler suggests in “David’s Sling”, simply recognize explicitly that such events are possible; that way, if one does occur, you can get on with dealing with the problem without also having to fight disbelief that it could have happened at all. Second, different people have different perspectives and interests and will treat different low-probability events differently; this dispersion of views and preparation helps ensure that someone is at least somewhat prepared. As I said, neither of these is really enough, but I simply can’t see any better options.