Yes. But as far as I can see this isn’t of any particular importance to this discussion. Why do you think it is?
It’s the key to my point, but you’re right that I should clarify the math here. Consider this part:
Actually, a frequentist can just keep collecting more data until they get p<0.05, then declare the null hypothesis to be rejected. No lying or suppression of data required. They can always do this, even if the null hypothesis is true: After collecting n data points, they have a 0.05 chance of seeing p<0.05. If they don’t, they then collect nK more data points, where K is big enough that whatever happened with the first n data points makes little difference to the p-value, so there’s still about a 0.05 chance that p<0.05. If that doesn’t produce a rejection, they collect nK^2 more data points, and so on until they manage to get p<0.05, which is guaranteed to happen eventually with probability 1.
This is true for one hypothesis. It is NOT true if you know the alternative hypothesis. That is to say: suppose you are checking the p-value BOTH for the null hypothesis bias=0.5, AND for the alternative hypothesis bias=0.55. You check both p-values and see which is smaller. Now it is no longer true that you can keep collecting more data until your desired hypothesis wins; if the truth is bias=0.5, then after enough flips, the alternative hypothesis will never win again, and will always have an astronomically small p-value.
To repeat: yes, you can disprove bias=0.5 with p<0.05; but at the time this happens, the alternative hypothesis of bias=0.55 might be disproven at p<10^{-100}. You are no longer guaranteed to win when there are two hypotheses rather than one.
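For concreteness, here’s a quick simulation sketch of that two-hypothesis check (my own illustration, not something from the original exchange; it uses a normal-approximation p-value, and the helper names are mine). A single run may or may not reject bias=0.5 within the flip budget, since the guarantee is only “eventually, with probability 1”; when it does reject, and especially if that happens after many flips, the p-value against bias=0.55 is typically minuscule.

```python
# Sketch: flip a genuinely fair coin, stop the first time the null bias=0.5 is
# "rejected" at p<0.05, and look at the p-value of the fixed alternative
# bias=0.55 at that same moment.
import math
import random

def two_sided_p(heads, n, b):
    """Normal-approximation two-sided p-value for `heads` heads in n flips of a coin with bias b."""
    z = (heads - n * b) / math.sqrt(n * b * (1 - b))
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(1)
heads, n = 0, 0
batch, max_flips = 100, 1_000_000   # check after every batch; give up eventually so the script halts

while n < max_flips:
    heads += sum(random.random() < 0.5 for _ in range(batch))   # true bias is 0.5
    n += batch
    if two_sided_p(heads, n, 0.5) < 0.05:                       # optional stopping against the null
        print(f"stopped at n={n}, heads={heads}")
        print(f"p-value against bias=0.50: {two_sided_p(heads, n, 0.5):.3g}")
        print(f"p-value against bias=0.55: {two_sided_p(heads, n, 0.55):.3g}")  # may underflow to 0.0 for large n
        break
else:
    print("bias=0.5 was never rejected within the flip budget (it would be eventually, with probability 1)")
```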
But they aren’t guaranteed to eventually get a Bayesian to think the null hypothesis is likely to be false, when it is actually true.
Importantly, this is false! This statement is wrong if you have only one hypothesis rather than two.
More specifically, I claim that if a sequence of coin flip outcomes disproves bias=0.5 at some p-value p, then for the same sequence of coin flips, there exists a bias b such that the likelihood ratio between bias b and bias 0.5 is O(1/p):1. I’m not sure what the exact constant in the big-O notation is (I was trying to calculate it, and I think it’s at most 10). Suppose it’s 10. Then if you have p=0.001, you’ll have a likelihood ratio of at least 100:1 for some bias.
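Here’s a rough back-of-the-envelope version of that claim, using the normal approximation (my own sketch; it’s not meant to pin down the exact constant). Write b̂ = k/N for the observed frequency after k heads in N flips, and z = 2√N·(b̂ - 1/2) for the usual test statistic. Then

log LR(b̂ : 0.5) = N·KL(b̂ ‖ 0.5) ≈ 2N·(b̂ - 1/2)^2 = z^2/2,

where KL is the KL divergence between Bernoulli(b̂) and Bernoulli(0.5), while the two-sided p-value satisfies p ≈ 2·e^{-z^2/2}/(z·√(2π)) for moderately large z, so

LR ≈ e^{z^2/2} ≈ (2/(√(2π)·z))·(1/p).

The factor 2/(√(2π)·z) stays above 1/10 for z up to about 8 (i.e. for any p-value you’re likely to encounter in practice), which is consistent with the guess that the constant is at most 10.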
Therefore, to get the likelihood ratio as high as you wish, you could employ the following strategy. First, flip coins until the p-value is very low, as you described. Then stop, and analyze the sequence of coin flips to determine the special bias b in my claimed theorem above. Then publish a paper claiming “the bias of the coin is b rather than 0.5, here’s my super high likelihood ratio”. This is guaranteed to work (with enough coin flips).
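In code, the strategy might look like the following sketch (again my own illustration, not from the thread; the “analyze the sequence” step just takes b to be the observed frequency, which is the maximum-likelihood bias). The lower you set the stopping threshold, the more flips a single run may need, so the loop can exhaust its budget without stopping:

```python
# Sketch: optionally stop once the p-value against bias=0.5 drops below a chosen
# threshold, then report the likelihood ratio for b = observed frequency vs 0.5.
import math
import random

def two_sided_p(heads, n, b=0.5):
    z = (heads - n * b) / math.sqrt(n * b * (1 - b))
    return math.erfc(abs(z) / math.sqrt(2))

def log_lik(heads, n, b):
    return heads * math.log(b) + (n - heads) * math.log(1 - b)

random.seed(2)
threshold = 0.01
heads, n, batch, max_flips = 0, 0, 100, 1_000_000

while n < max_flips:
    heads += sum(random.random() < 0.5 for _ in range(batch))   # the coin really is fair
    n += batch
    p = two_sided_p(heads, n)
    if p < threshold:
        b = heads / n                                           # the "special bias" is just the MLE
        lr = math.exp(log_lik(heads, n, b) - log_lik(heads, n, 0.5))
        print(f"n={n}, b={b:.4f} (only about 1/sqrt(n) away from 0.5)")
        print(f"p-value against 0.5: {p:.3g}")
        print(f"likelihood ratio for b vs 0.5: {lr:.1f} : 1  (within a modest factor of 1/p)")
        break
else:
    print(f"p never dropped below {threshold} within the flip budget; a longer run would get there eventually")
```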
(Generally, if the number of coin flips is N, the bias b will be on the order of 1/2±O(1/√N), so it will be pretty close to 1/2; but once again, this is no different from what happens in the frequentist case, because to ensure the p-value is small you’ll have to accept the effect size being small.)
OK. I think we may agree on the technical points. The issue may be with the use of the word “Bayesian”.
Me: But they aren’t guaranteed to eventually get a Bayesian to think the null hypothesis is likely to be false, when it is actually true.
You: Importantly, this is false! This statement is wrong if you have only one hypothesis rather than two.
I’m correct, by the usual definition of “Bayesian”, as someone who does inference by combining likelihood and prior. Bayesians always have more than one hypothesis (outside trivial situations where everything is known with certainty), with priors over them. In the example I gave, one can find a b such that the likelihood ratio with 0.5 is large, but the set of such b values will likely have low prior probability, so the Bayesian probably isn’t fooled. In contrast, a frequentist “pure significance test” does involve only one explicit hypothesis, though the choice of test statistic must in practice embody some implicit notion of what the alternative might be.
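To make that last point concrete, here is a small numerical sketch (mine, with an assumed Uniform(0,1) prior over the bias under the alternative, and the data placed right at the two-sided p=0.001 boundary where the optional stopper would quit). The best single bias b always achieves a large likelihood ratio against 0.5, but the Bayes factor, which averages the likelihood over the prior rather than maximizing it, stays modest and actually shrinks as the stopping time grows:

```python
# Sketch: best-case likelihood ratio (at the MLE bias) vs the Bayes factor that a
# Bayesian with a Uniform(0,1) prior over the alternative bias would compute.
import math

def log_lik(heads, n, b):
    return heads * math.log(b) + (n - heads) * math.log(1 - b)

def log_marginal_lik_uniform(heads, n):
    # integral of b^heads * (1-b)^(n-heads) db over [0,1] = Beta(heads+1, n-heads+1)
    return math.lgamma(heads + 1) + math.lgamma(n - heads + 1) - math.lgamma(n + 2)

z = 3.29   # roughly the two-sided p=0.001 boundary
for n in (1_000, 10_000, 100_000, 1_000_000):
    heads = round(n / 2 + z * math.sqrt(n) / 2)   # data sitting right at that boundary
    b_hat = heads / n
    max_lr = math.exp(log_lik(heads, n, b_hat) - log_lik(heads, n, 0.5))
    bayes_factor = math.exp(log_marginal_lik_uniform(heads, n) - log_lik(heads, n, 0.5))
    print(f"n={n:>9}: best-bias LR ~ {max_lr:7.0f} : 1, Bayes factor vs bias=0.5 ~ {bayes_factor:6.3f} : 1")
```

So even though some b has a huge likelihood ratio, those b values carry little prior mass, and the posterior odds against bias=0.5 barely move (and for large n can even end up favoring it).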
Beyond this, I’m not really interested in debating to what extent Yudkowsky did or did not understand all nuances of this problem.
A platonically perfect Bayesian given complete information and with accurate priors cannot be substantially fooled. But once again this is true regardless of whether I report p-values or likelihood ratios. p-values are fine.