My point was that the idea that the stopping rule doesn’t matter is more complicated than just calculating a Bayes factor and saying “look, the stopping rule doesn’t change the Bayes factor.”
The stopping rule won’t change the expectation of the Bayes factor.
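Here’s a minimal simulation sketch of that claim, at least under the null (the 0.5-vs-0.7 coin hypotheses, the BF ≥ 3 stopping threshold, and the 20-flip cap are my own illustrative choices, not anything from the discussion): data are generated under H0, and the average Bayes factor comes out around 1 whether you use a fixed sample size or stop early once the Bayes factor crosses a threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simple-vs-simple setup (illustrative numbers, not from the discussion):
# H0: P(heads) = 0.5 is the true data-generating process; H1: P(heads) = 0.7.
P0, P1 = 0.5, 0.7

def mean_bayes_factor(optional_stopping, n_max=20, threshold=3.0, trials=100_000):
    """Average Bayes factor (H1 over H0) across simulated experiments run under H0."""
    total = 0.0
    for _ in range(trials):
        bf = 1.0
        for _ in range(n_max):
            if rng.random() < P0:          # flip a fair coin (data generated under H0)
                bf *= P1 / P0              # heads multiplies the likelihood ratio by 1.4
            else:
                bf *= (1 - P1) / (1 - P0)  # tails multiplies it by 0.6
            if optional_stopping and bf >= threshold:
                break                      # stop as soon as the evidence "looks convincing"
        total += bf
    return total / trials

print("fixed n:          ", mean_bayes_factor(False))  # ≈ 1.0
print("optional stopping:", mean_bayes_factor(True))   # also ≈ 1.0
```

This is just the optional stopping theorem applied to the likelihood ratio, which is a martingale under the null.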
Sometimes we want a 95% confidence interval to mean that if we repeated the procedure 100 times, the resulting intervals would include the true value about 95 of those times.
If your prior is correct, then your 95% credibility interval will, in fact, be well calibrated and be correct 95% of the time. I argued at length on tumblr that most or all of the force of the stopping rule objection to Bayes is a disguised “you have a bad prior” situation. If you’re willing to ask the question that way, you can generate similar cases without stopping rules as well. For instance, imagine there are two kinds of coins: ones that land on heads 100% of the time, and ones that land on heads 20% of the time. (The remaining flips are tails.) You get one flip with the coin. Oh, one more thing: I tell you that there are 1 billion coins of the first kind, and only one of the second kind.
You flip the coin once. Since the first kind of coin always lands heads and makes up nearly all of the coins, you are overwhelmingly likely to see heads, which gives a 5:1 likelihood ratio in favor of the first kind of coin. Why is this problematic?
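To spell out the arithmetic, here’s a quick sketch using the numbers above:

```python
from fractions import Fraction

# One draw from a bag with a billion always-heads coins and one 20%-heads coin.
n_always, n_rare = 10**9, 1
prior_always = Fraction(n_always, n_always + n_rare)
prior_rare = Fraction(n_rare, n_always + n_rare)

p_heads_always = Fraction(1)      # first kind: heads 100% of the time
p_heads_rare = Fraction(1, 5)     # second kind: heads 20% of the time

# Marginal probability that the single flip comes up heads
p_heads = prior_always * p_heads_always + prior_rare * p_heads_rare
print(float(p_heads))                  # ~0.9999999992: heads is essentially guaranteed

# Likelihood ratio that a heads observation gives in favor of the first kind
print(p_heads_always / p_heads_rare)   # 5, i.e. a 5:1 ratio
```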
I can give, and have given, a similar case for 95% credibility intervals as opposed to Bayes factors, which I’ll write out if you’re interested.
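Separately from that case, the earlier claim that a correct prior makes 95% credible intervals right 95% of the time can be checked directly with a small simulation. This is just an illustrative sketch (the Beta(2, 2) prior, the 30 flips, and the conjugate Beta-Binomial model are my own assumptions, not from the discussion): draw the coin’s bias from the prior, generate data, compute the 95% posterior credible interval, and count how often it covers the true bias.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# The same Beta(2, 2) prior generates the truth and is used for inference,
# i.e. the prior is "correct" in the sense above.
a0, b0, n_flips, trials = 2, 2, 30, 20_000
covered = 0

for _ in range(trials):
    theta = rng.beta(a0, b0)                   # true heads-probability, drawn from the prior
    heads = rng.binomial(n_flips, theta)       # observed data
    posterior = stats.beta(a0 + heads, b0 + n_flips - heads)  # conjugate Beta posterior
    lo, hi = posterior.ppf(0.025), posterior.ppf(0.975)       # central 95% credible interval
    covered += lo <= theta <= hi

print(covered / trials)  # ≈ 0.95: the intervals are calibrated when the prior is right
```

Each interval contains 95% of its posterior mass, so averaged over data generated from the prior it covers the true value 95% of the time; that is the calibration being claimed.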