Maybe I should back up a bit.
I agree that at 1000004:1000000, you’re looking at the wrong hypothesis. But in the above example, 104:100, you’re looking at the wrong hypothesis too. It’s just that the factor of 10,000 makes it easier to spot. In fact, at 34:30, or even fewer iterations, you’re probably looking at the wrong hypothesis as well.
A single percentage point of doubt gets blown up and multiplied, but that percentage point has to come from somewhere. It can’t just spring forth from nothingness once you get past 50 iterations. That means you can’t actually be 96.6264% certain (Eliezer’s pre-rounding certainty) at the start; your real certainty has to be a little lower.
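In odds form this is just the standard Bayes identity (nothing specific to this problem), writing W for “something’s wrong”, H for “the setup is exactly as stated”, and D for the data:

\[
\frac{P(W \mid D)}{P(H \mid D)} \;=\; \frac{P(W)}{P(H)} \cdot \frac{P(D \mid W)}{P(D \mid H)}
\]

The evidence can only multiply whatever prior odds were already there; if P(W) starts at exactly zero, no run of iterations can make the left-hand side nonzero.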
The real question in my mind is when that 1% of doubt actually grows into a significant 5%, then 10%, then 20% probability that something’s wrong. 8:4 feels fine. 104:100 feels overwhelming. But how much doubt am I supposed to feel at 10:6, or at 18:14?
How do you even calculate that if there’s no allowance in the original problem?
There should always, really, be “allowance in the original problem”. Perhaps not explicitly factored in, but you should assign some nonzero probability to possibilities like “the experimenter lied to me”, “I goofed in some crazy way”, “I am being deceived by malevolent demons”, etc. In practice, these wacky hypotheses may not occur to you until the evidence for them starts getting large, and you can decide at that point what prior probabilities you should have put on them. (Unfortunately it’s easy to do that wrongly, e.g. because of hindsight bias.)
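Here’s a minimal sketch of what that looks like once you do put a number on it. The concrete figures are purely illustrative assumptions on my part, not anything from the original problem: I pretend the stated hypothesis predicts “success” on each iteration with probability 2/3, model “something’s wrong” as the outcomes really being 50/50 noise, and give that catch-all a 1% prior.

```python
import math

# All three numbers below are illustrative assumptions, not from the original problem.
P_SUCCESS_GIVEN_H = 2 / 3   # what the stated hypothesis is assumed to predict per iteration
P_SUCCESS_GIVEN_W = 1 / 2   # "something's wrong": the outcomes are really just noise
PRIOR_W = 0.01              # the single percentage point of doubt, made explicit up front

def p_something_wrong(successes: int, failures: int) -> float:
    """Posterior probability of 'something's wrong' after the given counts (log-space Bayes)."""
    log_lik_h = (successes * math.log(P_SUCCESS_GIVEN_H)
                 + failures * math.log(1 - P_SUCCESS_GIVEN_H))
    log_lik_w = (successes * math.log(P_SUCCESS_GIVEN_W)
                 + failures * math.log(1 - P_SUCCESS_GIVEN_W))
    # Posterior odds of W = prior odds of W times the likelihood ratio P(D|W)/P(D|H).
    log_odds = math.log(PRIOR_W / (1 - PRIOR_W)) + log_lik_w - log_lik_h
    # Convert odds to a probability without overflowing exp() for huge counts.
    if log_odds > 0:
        return 1.0 / (1.0 + math.exp(-log_odds))
    odds = math.exp(log_odds)
    return odds / (1.0 + odds)

for a, b in [(8, 4), (10, 6), (18, 14), (34, 30), (104, 100), (1000004, 1000000)]:
    print(f"{a}:{b} -> P(something's wrong) ~ {p_something_wrong(a, b):.4f}")
```

With those made-up numbers the doubt actually shrinks slightly at 8:4, is still under 2% at 18:14, reaches roughly 10% at 34:30, and is over 99% at 104:100, which at least matches the shape of the intuitions above. The exact crossover obviously depends on the numbers you assume, which is the whole point of having to write them down.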
As Douglas_Knight says, frequentist statistics is full of tests that will tell you when some otherwise plausible hypothesis (e.g., “these two samples are drawn from things with the same probability distribution”) is incompatible with the data in particular (or not-so-particular) ways.
Frequentist tests are good here.
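For this particular kind of count data, the simplest such check is probably a binomial goodness-of-fit test: ask how surprising the observed counts would be if the stated model were true, with no prior on “something’s wrong” needed at all. A rough sketch, reusing the same assumed 2/3 per-iteration probability as above (scipy.stats.binomtest, in recent SciPy versions):

```python
from scipy.stats import binomtest

# Assumed, not from the original problem: the stated model predicts success with p = 2/3.
ASSUMED_P = 2 / 3

for successes, failures in [(8, 4), (34, 30), (104, 100)]:
    n = successes + failures
    result = binomtest(successes, n, ASSUMED_P)  # two-sided by default
    print(f"{successes}:{failures} -> p-value {result.pvalue:.3g} under the stated model")
```

With these made-up numbers, 8:4 looks unremarkable (it is exactly the expected ratio), 34:30 is already suspicious, and 104:100 is wildly improbable under the stated model. So the frequentist check flags the same pattern the Bayesian version does, just without asking you to price the doubt in advance.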