I don’t believe this is true. Each trial is an independent piece of Bayesian evidence; its only connection to the other trials is that your prior going into it has already been updated by them. If you run until significance you will have updated to a certain probability, and if you run until you’re bored you’ll also have updated to a certain probability.
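That point can be made concrete with a minimal sketch. The hypotheses and numbers here are my own illustration, not anything from the original program: compare a fair coin against a hypothetical 2/3-heads coin. The posterior odds depend only on the counts of heads and tails, not on why the experimenter stopped.

```python
# Posterior odds for a 2/3-heads coin vs. a fair coin, given
# h heads and t tails. The likelihood ratio depends only on the
# counts, so any stopping rule producing the same data gives the
# same Bayesian update.
def posterior_odds(h, t, prior_odds=1.0):
    likelihood_ratio = ((2 / 3) ** h * (1 / 3) ** t) / (0.5 ** (h + t))
    return prior_odds * likelihood_ratio

# 8 heads, 4 tails -> identical update whether the experimenter
# planned 12 trials, stopped at 8 heads, or stopped out of boredom.
print(posterior_odds(8, 4))
```

The printed odds (about 1.97 in favor of the biased coin) are a function of (h, t) alone, which is exactly why the stopping rule drops out of the Bayesian calculation.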
You have to be very careful that you’re actually asking the same question in both cases. In the case I tested above, I was asking exactly the same question (my intuition said very strongly that I wasn’t, but that’s because I was thinking of the very similar but subtly different question below). The “fairly obvious in retrospect” refers to that particular phrasing of the problem: I would have immediately understood that the probabilities had to be equal if I had phrased it that way, but since I didn’t, that insight was a little harder-earned.
The question I was actually thinking of is as follows.
Scenario A: You run 12 trials, then check whether your odds ratio reaches significance and report your results.
Scenario B: You run trials until either your odds ratio reaches significance or you hit 12 trials, then report your results.
I think scenario A is different from scenario B, and that’s the one I was thinking of (it’s the “run subjects until you hit significance or run out of funding” model).
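The difference between the two scenarios can be simulated directly. This is a toy sketch, not the program referenced below: I substitute a deliberately simple "significance" criterion (heads exceeding tails by 4, a margin I chose for illustration) in place of an actual odds-ratio test, and assume a fair coin so that every "significant" result is a false positive.

```python
import random

N_TRIALS = 12
MARGIN = 4        # toy criterion: "significant" when heads - tails >= 4
N_SIMS = 100_000

def scenario_a(rng):
    """Run all 12 trials, then check the criterion once at the end."""
    heads = sum(rng.random() < 0.5 for _ in range(N_TRIALS))
    tails = N_TRIALS - heads
    return heads - tails >= MARGIN

def scenario_b(rng):
    """Check after every trial; stop as soon as the criterion is met."""
    heads = tails = 0
    for _ in range(N_TRIALS):
        if rng.random() < 0.5:
            heads += 1
        else:
            tails += 1
        if heads - tails >= MARGIN:
            return True
    return False

rng = random.Random(0)
a_rate = sum(scenario_a(rng) for _ in range(N_SIMS)) / N_SIMS
b_rate = sum(scenario_b(rng) for _ in range(N_SIMS)) / N_SIMS
print(f"Scenario A reports significance: {a_rate:.3f}")
print(f"Scenario B reports significance: {b_rate:.3f}")
```

Scenario B reports significance strictly more often: every run that passes the criterion at trial 12 would also have been caught by the sequential check, but B additionally catches runs that crossed the margin early and would have drifted back down. That is the sense in which the two scenarios are different experiments, even when they happen to report the same dataset.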
A new program confirms my intuition about the question I had been thinking of when I decided to test it. I agree with Eliezer that it shouldn’t matter whether the researcher goes to a certain number of trials or a certain number of positive results, but I disagree with the implication that the same dataset always gives you the same information.
The program is here, you can fiddle with the parameters if you want to look at the result yourself.
formatting sucks
Try this:
I did. It didn’t indent properly. I tried again, and it still doesn’t.