I think a lot of probabilistic and behavioral reasoning starts to break down and act strangely in the presence of very large odds ratios.
For example, if I discover that I have won the lottery, how should I estimate the probability that I am hallucinating, or dreaming, or insane? In the first case (hallucination), I cannot trust the evidence of my senses, but I can still reason about that evidence, so I should at least be able to work out a P(hallucination). In the second case (dreaming), my memory and reasoning faculties are probably significantly impaired, BUT any actions I take will have no effect on the real world, so I should consider this case when estimating what is true, but IGNORE it when deciding how to act. In the third case (insanity), it’s likely that I can’t even reason coherently, so it’s not clear how to weigh this state at all. Conditional on being in it, my reasoning is questionable; conditional on being able to reason about probabilities, I’m very likely (how likely?) not in it; therefore when reasoning about how to behave, I should probably discount it by what amounts to a sort of anthropic reasoning.
So whatever the probability is that I can’t trust my senses, or that I can’t trust my own reasoning, that probability acts as a ceiling: in many cases it will be very hard for me to reason directly about probabilities more extreme than it, because past that point the “my perception or reasoning has failed” hypothesis starts to dominate the calculation.
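To make that ceiling concrete, here is a minimal sketch of the Bayesian arithmetic. All the numbers (the lottery odds, the chance of hallucinating exactly this experience) are made-up placeholders for illustration, not estimates from anywhere:

```python
# Sketch: how a prior probability of hallucination caps a Bayesian update.
# Model assumptions (all hypothetical): the "I won" experience is certain
# given a real win, and occurs with probability p_false otherwise
# (hallucination, dream, etc.).

def posterior_p_won(prior_win: float, p_false: float) -> float:
    """P(actually won | experience of winning), by Bayes' rule."""
    p_experience = prior_win * 1.0 + (1.0 - prior_win) * p_false
    return prior_win / p_experience

prior_win = 1e-8  # placeholder odds of winning the lottery
p_false = 1e-6    # placeholder P(a false "I won" experience)

print(posterior_p_won(prior_win, p_false))  # ~0.0099: about 1%, not certainty
```

In this toy model the experience can supply a likelihood ratio of at most about 1/p_false, so a hundred-million-to-one prior only gets cut to roughly a hundred-to-one against actually having won; no sensory evidence can push the posterior past the ceiling set by the failure hypotheses.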