As several people have asked about my intentions in posing these problems, I’ll answer here.
What I was interested in was seeing how people deal with extreme probabilities.
Some people have in the past expressed the view on LW that it is not humanly possible to be justifiably 80 decibans sure of anything. You would have to be able to be right about it with an error rate of no more than 1 in 100 million. Who can be right that often about anything? Surely, some would say, it must remain more likely that you’re dreaming, or hypnotised, or being trolled by the Matrix Lords, or something else that you haven’t even thought of, for who can scour out every last hundred-millionth of possibility space? And yet ordinary people, who have never learned to believe that it is impossible, have no difficulty in collecting the Euromillions jackpot, which has approximately those odds against. If they are as sure afterwards that they have won as they were sure beforehand that they would not, that’s a swing of 160 decibans.
BTW, Euromillions was the lottery I had in mind when composing the example, and it is not a fly-by-night operation; I might have sharpened the example by saying so. Someone wins the Euromillions jackpot every few weeks, for a prize of anywhere from 10 million to over 100 million pounds, depending on how many weeks it has rolled over.
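For concreteness, here is the deciban arithmetic behind that 160-deciban figure, a sketch using the round 1-in-10^8 approximation rather than the exact Euromillions odds:

```latex
% Decibans measure log-odds.
\[
  \mathrm{db}(H) \;=\; 10\log_{10}\frac{P(H)}{P(\lnot H)}
\]
% Before the draw: roughly 1 in 10^8 odds of winning.
\[
  10\log_{10}\frac{10^{-8}}{1-10^{-8}} \;\approx\; -80\ \text{db}
\]
% Afterwards, equally sure the other way:
\[
  10\log_{10}\frac{1-10^{-8}}{10^{-8}} \;\approx\; +80\ \text{db}
\]
% Total swing: 80 - (-80) = 160 decibans.
```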
The current consensus in the comments, though, is that the evidence of the house keys is strong enough that my posterior certainty of having them is not perceptibly swayed even by methodological flaws gross enough to completely discredit any paper that relied on statistical techniques to support its claims, and that I can be justifiably sure I have won the lottery at least by the time my bank confirms receipt of the money. These are my own views too.
“0 and 1 are not probabilities”, people still say here from time to time, yet a lot of everyday life runs well enough on 0s and 1s.
I think a lot of probabilistic and behavioral reasoning starts to break down and act strangely in the presence of very large odds ratios.
For example, if I discover that I have won the lottery, how should I estimate the probability that I am hallucinating, or dreaming, or insane? In the first case, I cannot trust the evidence of my senses, but I can still reason about that evidence, so I should at least be able to work out a P(hallucination). In the second case, my memory and reasoning faculties are probably significantly impaired, BUT any actions I take will actually have no effect on the world, so I should consider this case when computing questions about truth, but IGNORE it when computing questions about action. In the third case, it’s likely that I can’t even reason coherently, so it’s not clear how to weigh this state at all. Conditional on being in it, my reasoning is questionable; conditional on my being able to reason about probabilities, I’m very likely (how likely?) not in it; therefore when reasoning about how to behave, I should probably discount it by what seems to be a sort of anthropic reasoning.
So whatever the probabilities are that I can’t trust my senses / that I can’t trust my own reasoning abilities, it’s going to be very hard for me to reason directly about probabilities more extreme than that in many cases.
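To make that concrete, here is a small sketch, entirely my own illustration with placeholder numbers: suppose there is some probability p_compromised that my senses or reasoning are compromised and would present me with the same apparent evidence regardless of the truth. Then no evidence routed through them can be worth more than about -10·log10(p_compromised) decibans, however good its in-world error rate is.

```python
import math

def evidence_decibans(p_compromised, false_positive_rate):
    """Decibans contributed by evidence that is certain given a win and has the
    given false-positive rate otherwise, filtered through a channel that is
    compromised (and hence uninformative) with probability p_compromised."""
    p_e_given_win = 1.0  # compromised or not, I see the evidence if I really won
    p_e_given_not = (1 - p_compromised) * false_positive_rate + p_compromised
    return 10 * math.log10(p_e_given_win / p_e_given_not)

# Placeholder value for "I'm hallucinating/dreaming all of this".
p_compromised = 1e-6

# Improving the in-world false-positive rate past p_compromised buys almost
# nothing: the evidence saturates at about 60 decibans, short of the 80
# needed to overcome the prior improbability of winning.
for fp in (1e-3, 1e-6, 1e-9, 1e-12, 0.0):
    print(f"false-positive rate {fp:g}: evidence worth "
          f"{evidence_decibans(p_compromised, fp):.1f} db")
```

Under these assumptions the likelihood ratio of the evidence is capped at roughly 1/p_compromised, which is just the numerical form of the point above: probabilities more extreme than my trust in my own faculties are hard to reach by direct reasoning about evidence.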