When people buy insurance, they often plan for events that are less probable than 1%. The intuitive difficulty here is not that you act on an event with a probability of 1%, but that you act on an event whose probability (be it 1% or 10% or 0.1%) is estimated intuitively, so that you have no frequency statistics to rely on, and great uncertainty remains about the value of the probability itself.
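To make the underlying arithmetic concrete, here is a minimal worked example; all numbers (the loss, the probability, the premium) are illustrative assumptions, not figures from any actual policy:

```latex
% Illustrative insurance arithmetic (all numbers hypothetical).
% A household faces a 0.1% annual chance of a $300,000 loss.
% Expected loss: 0.001 * 300,000 = $300 per year, yet a $500
% premium can still be worth paying under risk aversion -- and
% the same comparison applies whether the 0.1% comes from
% actuarial tables or from an intuitive estimate.
\[
  \mathbb{E}[\text{loss}] = 0.001 \times \$300{,}000 = \$300
  \;<\; \$500 = \text{premium}.
\]
```

The point of the sketch is only that the decision turns on the estimated probability; nothing in the arithmetic requires that estimate to come from frequency data.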
People fear acting on uncertainty that is about to be resolved, for if it resolves against them, they will face wide agreement that, in retrospect, their action was wrong. Furthermore, if the action is meant to mitigate an improbable risk, they positively expect the uncertainty to resolve against them. But this consideration doesn’t make the estimated probability any lower, and estimation is the best we have.
The analogy with insurance isn’t exact. One could argue (though I think one would be wrong) that the diminishing returns of bounded utility set in on scales larger than the kinds of events people typically insure against, but smaller than whatever fraction of astronomical waste would justify acting against a 1% existential-risk probability.