It seems like a reasonable heuristic that small probabilities are also likely to be uncertain probabilities, since they tend to be attached to rare events for which we have only limited numbers of observations.
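To make that heuristic concrete, here is a minimal sketch (the helper name and the uniform Beta(1, 1) prior are my own illustrative assumptions, not part of the original argument): given the same number of observations, the posterior for a rare event's probability is far wider, relative to its mean, than the posterior for a common one.

```python
import math

def relative_uncertainty(successes, trials):
    """Posterior relative uncertainty (std / mean) of an event's
    probability, under a uniform Beta(1, 1) prior updated on
    binomially distributed observations."""
    a = 1 + successes                 # posterior Beta parameters
    b = 1 + trials - successes
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return math.sqrt(var) / mean

# Same 10,000 observations in both cases, but the rare event's
# probability estimate is ~40x more uncertain in relative terms.
print(relative_uncertainty(5, 10_000))      # rare event:   ~0.41
print(relative_uncertainty(5_000, 10_000))  # common event: ~0.01
```

The intuition: the relative error of such an estimate shrinks roughly as one over the square root of the number of times the event has actually been observed, so an event seen only five times is pinned down far more loosely than one seen five thousand times.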
The occurrence of very low probability events is also indicative of unaccounted-for structural uncertainty. Taking into account both where I find myself in the multiverse and thinking seriously about anthropic reasoning left me really confused (and I still am, but less so). I think it was good that I became confused and didn’t just think “Oh, according to my model, a really low probability event just happened to me, how cool is that?” It wouldn’t surprise me all that much if there were a basic evolutionary adaptation not to trust one’s models after heavily unanticipated events, and this may generalize to being distrustful of small probabilities in general. (But I’m postulating an evolutionary adaptation for rationality based on almost no evidence, which is most often a byproduct of thinking “What would I do if I were evolution?”, which is quite the fallacy.)