It depends only on the prior. I consider all these “stopping rule paradoxes” to be disguised cases where you give the Bayesian a bad prior, and the frequentist formula encodes a better prior.
Then you are doing a very confusing thing that isn’t likely to give much insight. Frequentist inference and Bayesian inference are different, and it’s useful to at least understand both ideas (even if you reject frequentism).
Frequentists are bounding their error with various forms of the law of large numbers; they aren’t coherently integrating evidence. So saying the “frequentist encodes a better prior” is to miss the whole point of how frequentist statistics works.
And the point in the paper I linked has nothing to do with the prior; it’s about the Bayes factor, which is independent of the prior. Most people who advocate Bayesian statistics in experiments advocate sharing Bayes factors, not posteriors, in order to abstract away the problem of prior construction.
And the point in the paper I linked has nothing to do with the prior; it’s about the Bayes factor, which is independent of the prior.
Let me put it differently. Yes, your chance of getting a Bayes factor of >3 is 1.8% with data peeking, as opposed to 1% without; but your chance of getting a higher factor also goes down, because you stop as soon as you reach 3. Your expected Bayes factor is necessarily 1 weighted over your prior; you expect to find evidence for neither side. Changing the exact distribution of your results won’t change that.
Should that say, rather, that its expected log is zero? A factor of n being as likely as a factor of 1/n.
My original response to this was wrong and has been deleted
I don’t think this has anything to do with logs, but rather is about the difference between probabilities and odds. Specifically, the Bayes factor works on the odds scale, but the proof for conservation of expected evidence is on the regular probability scale.
If you consider the posterior under all possible outcomes of the experiment, the ratio of the posterior probability to the prior probability will on average be 1 (when weighted by the probability of the outcome under your prior). However, the ratio of the posterior probability to the prior probability is not the same thing as the Bayes factor.
If you multiply the Bayes factor by the prior odds, then transform the resulting quantity (i.e. the posterior odds) from the odds scale to a probability, and then divide by the prior probability, the resulting quantity will on average be 1.
However, this is too complicated and doesn’t seem like a property that gives any additional insight into the Bayes factor.
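To make the distinction concrete, here is a minimal numerical check in Python (the two coin-bias hypotheses and the 1/3 : 2/3 prior are invented for illustration): averaging the posterior over the prior-predictive distribution of the data recovers the prior exactly, the Bayes factor averaged the same way is not 1, but the Bayes factor averaged over the data distribution under the null is exactly 1.

```python
from math import comb

# Toy setup (invented for illustration): H1 = coin with bias 0.8, H0 = fair coin.
p_h1, p_h0 = 1/3, 2/3        # prior
n = 5                        # the data is the number of heads in 5 flips

def lik(k, bias):            # P(k heads out of n | bias)
    return comb(n, k) * bias**k * (1 - bias)**(n - k)

post_avg = 0.0   # E[P(H1 | data)], weighted by the prior predictive P(data)
bf_avg   = 0.0   # E[BF_10], weighted the same way
bf_null  = 0.0   # E[BF_10], weighted by P(data | H0)

for k in range(n + 1):
    l1, l0 = lik(k, 0.8), lik(k, 0.5)
    marginal = p_h1 * l1 + p_h0 * l0        # prior predictive P(data = k heads)
    post_avg += marginal * (p_h1 * l1 / marginal)
    bf_avg   += marginal * (l1 / l0)
    bf_null  += l0 * (l1 / l0)

print(post_avg)  # 0.333...: the expected posterior equals the prior
print(bf_avg)    # > 1 here: the expected Bayes factor need not be 1
print(bf_null)   # exactly 1: the Bayes factor does average to 1 under the null
```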
That’s probably a better way of putting it. I’m trying to intuitively capture the idea of “no expected evidence”; you can frame that in multiple ways.
Huh? E[X] = 1 and E[log(X)] = 0 are two very different claims; which one are you actually claiming?
Also, what is the expectation with respect to? Your prior or the data distribution or something else?
I’m claiming the second. I was framing it in my mind as “on average, the factor will be 1”, but on further thought the kind of “average” required is the average of the log. I should probably use logs in the future for statements like that.
The prior.
This seems wrong then. Imagine you have two hypotheses, which you place equal probability on, but then will see an observation that definitively selects one of the two as correct. E[p(x)] = 1/2 both before and after the observation, but E[log p(x)] is −1 vs. −infinity.
In that case, your Bayes factor will be either 2/0, or 0/2.
Log of the first is infinity, log of the second is negative infinity.
The average of those two numbers is (insert handwave here) 0.
(If you use the formula for the log of a quotient, this actually works.)
Replace 1/2 and 1/2 in the prior with 1/3 and 2/3, and I don’t think you can make them cancel anymore.
I think we need to use actual limits then, instead of handwaving infinities. So let’s say the posterior for the unfavored hypothesis is e → 0 (and is the same for both sides). The Bayes factor for the first hypothesis being confirmed is then (1-e)*3/(3e/2), which simplifies to 2/e − 2 (see http://www.wolframalpha.com/input/?i=%281-e%29*3%2F%283e%2F2%29). The Bayes factor for the second being confirmed is 3e/((1-e)*3/2), which simplifies to (2e)/(1-e) (see http://www.wolframalpha.com/input/?i=3e%2F%28%281-e%293%2F2%29).
Now, let me digress and derive the probability of finding evidence for each hypothesis; it’s almost but not quite 1/3 : 2/3. There’s a prior of 1/3 on the first hypothesis being true; this must equal the weighted expectation of the posteriors, by conservation of evidence. So if we call x the chance of finding evidence for hypothesis one, then x*(1-e)+(1-x)*e must equal 1/3. Solving for x (http://www.wolframalpha.com/input/?i=x*%281-e%29%2B%281-x%29*e%3D1%2F3+solve+for+x) gives
x = (1-3 e)/(3-6 e)
which, as a sanity check, does in fact head towards 1/3 as e goes towards 0. The corresponding probability of finding evidence for the second hypothesis is 1-x = (2-3 e)/(3-6 e).
Getting back to expected logs of Bayes factors, the chance of getting a Bayes factor of 2/e − 2 is (1-3 e)/(3-6 e), while the chance of getting (2e)/(1-e) is (2-3 e)/(3-6 e).
Log of the first, times its probability, plus log of the second, times its probability, is not zero (http://www.wolframalpha.com/input/?i=log+%282%2Fx+-+2%29*+%281-3+x%29%2F%283-6+x%29%2Blog%28%282x%29%2F%281-x%29%29*+%282-3+x%29%2F%283-6+x%29%2Cx%3D.01).
Hm. I’ll need to think this over; this wasn’t what I expected. Either I made some mistake, or am misunderstanding something here. Let me think on this for a bit.
Hopefully I’ll update this soon with an answer.
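For reproducibility, here is the same check as the Wolfram Alpha link, done as a short Python sketch with e = 0.01 (the value used in the link above):

```python
import math

e = 0.01                   # posterior probability left on the disfavored hypothesis
bf_1 = 2/e - 2             # Bayes factor when hypothesis one is confirmed
bf_2 = (2*e) / (1 - e)     # Bayes factor when hypothesis two is confirmed
x = (1 - 3*e) / (3 - 6*e)  # chance of ending up confirming hypothesis one

expected_log_bf = x * math.log(bf_1) + (1 - x) * math.log(bf_2)
expected_bf     = x * bf_1 + (1 - x) * bf_2

print(expected_log_bf)  # about -0.87: not zero
print(expected_bf)      # about 65: not 1 either
```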
I think it’s not going to work out. The expected posterior is equal to the prior, but the expected log Bayes factor will have the form p log(K1) + (1-p) log(K2), which for general p is just a mess. Only when p=1/2 does it simplify to (1/2) log(K1 K2), and when p=1/2, K2=1/K1, so the whole thing is zero.
Okay, so I think I worked out where my failed intuition came from. The Bayes factor is the ratio of posterior/prior for hypothesis A, divided by that ratio for hypothesis B. The numerator is expected to be 1 (because the expected posterior over the prior is one; factoring out the prior in each case keeps that fraction constant), and the denominator is as well (same argument), but the expected ratio of two numbers each expected to be one is not always one. So my brain turned “numerator and denominator one” into “ratio one”.
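That intuition and where it breaks can be seen in the same numbers: with the 1/3 : 2/3 prior and e = 0.01, each posterior/prior ratio averages to exactly 1 over the two possible outcomes, but their ratio (the Bayes factor) does not. A small sketch:

```python
e = 0.01
x = (1 - 3*e) / (3 - 6*e)          # chance the data ends up favoring hypothesis one

# posterior/prior ratio for each hypothesis, under the two possible outcomes
r1 = [(1 - e) / (1/3), e / (1/3)]  # hypothesis one (prior 1/3)
r2 = [e / (2/3), (1 - e) / (2/3)]  # hypothesis two (prior 2/3)
probs = [x, 1 - x]

print(sum(p * a for p, a in zip(probs, r1)))             # 1: the numerator averages to 1
print(sum(p * b for p, b in zip(probs, r2)))             # 1: the denominator averages to 1
print(sum(p * a / b for p, a, b in zip(probs, r1, r2)))  # ~65: their ratio does not
```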
I think it’s not going to work out. The expected posterior is equal to the prior, but the expected log Bayes factor will have the form p log(K1) + (1-p) log(K2), which for general p is just a mess. Only when p=1/2 does it simplify to (1/2) log(K1 K2), and when p=1/2, K2=1/K1, so the whole thing is zero.
Your expected Bayes factor is necessarily 1 weighted over your prior; you expect to find evidence for neither side.
I think this claim is correct on the natural scale except it should be weighted over the probability of the data, not weighted over the prior. The margin of this comment is too small to contain the proof, so I’ll put a pdf in my public Dropbox folder at https://www.dropbox.com/s/vmom25u9ic7redu/Proof.pdf?dl=0
(I am slightly out of my depth here; I am not a mathematician or a Bayesian theorist, so I reserve the right to delete this comment if someone spots a flaw.)
Then you are doing a very confusing thing that isn’t likely to give much insight. Frequentist inference and Bayesian inference are different, and it’s useful to at least understand both ideas (even if you reject frequentism).
I think I understand frequentism. My claim here was that the specific claim of “the stopping rule paradox proves that frequentism does better than Bayes” is wrong, or is no stronger than the standard objection that Bayes relies on having good priors.
So saying the “frequentist encodes a better prior” is to miss the whole point of how frequentist statistics works.
What I meant is that you can get the same results as the frequentist in the stopping rule case if you adopt a particular prior. I might not be able to show that rigorously, though.
And the point in the paper I linked has nothing to do with the prior; it’s about the Bayes factor, which is independent of the prior.
That paper only calculates what happens to the Bayes factor when the null is true. There’s nothing that implies the inference will be wrong.
There are a couple of different versions of the stopping rule cases. Some are disguised priors, and some don’t affect calibration/inference or any Bayesian metrics.
That paper only calculates what happens to the Bayes factor when the null is true. There’s nothing that implies the inference will be wrong.
That is the practical problem for statistics (the null is true, but the experimenter desperately wants it to be false). Everyone wants their experiment to be a success. The goal of this particular form of p-hacking is to increase the chance that you get a publishable result. The goal of the p-hacker is to increase the probability of type 1 error. A publication rule based on Bayes factors instead of p-values is still susceptible to optional stopping.
You seem to be saying that a rule based on posteriors would not be susceptible to such hacking?
You seem to be saying that a rule based on posteriors would not be susceptible to such hacking?
I’m saying that all inferences are still correct. So if your prior is correct/well calibrated, then your posterior is as well. If you end up with 100 studies that all found an effect for different things at a posterior of 95%, 5% of them should be wrong.
The goal of the p-hacker is to increase the probability of type 1 error.
So what I should say is that the Bayesian doesn’t care about the frequency of type 1 errors. If you’re going to criticise that, you can do so without regard to stopping rules. I gave an example in a different reply of hacking Bayes factors; now I’ll give one with hacking posteriors:
Two kinds of coins: one fair, one 10%H/90%T. There are 1 billion of the fair ones, and 1 of the other kind. You take a coin, flip it 10 times, then say which coin you think it is. The Bayesian gets the biased coin, and no matter what he flips, will conclude that the coin is fair with overwhelming probability. The frequentist gets the coin, gets ~9 tails, and says “no way is this fair”. There, the frequentist does better because the Bayesian’s prior is bad (I said there are a billion fair ones and only one biased one, but then only looked at the case where the biased one was drawn).
It doesn’t matter if you always conclude with 95% posterior that the null is false when it is true, as long as you have 20 times as many cases where the null actually is false. Yes, this opens you up to being tricked; but if you’re worried about deliberate deception, you should include a prior over that. If you’re worried about publication bias when reading other studies, include a prior over that, etc.
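For concreteness, here is the arithmetic for that coin example as a quick sketch (assuming the 10 flips come out as 9 tails and 1 head; the exact split is my choice):

```python
from math import comb

prior_fair, prior_biased = 1e9 / (1e9 + 1), 1 / (1e9 + 1)
heads, flips = 1, 10             # 9 tails, 1 head

def lik(p_heads):                # P(this many heads in 10 flips | coin)
    return comb(flips, heads) * p_heads**heads * (1 - p_heads)**(flips - heads)

lik_fair, lik_biased = lik(0.5), lik(0.1)
bf_biased = lik_biased / lik_fair
posterior_biased = prior_biased * lik_biased / (
    prior_biased * lik_biased + prior_fair * lik_fair)

print(bf_biased)         # ~40: the data alone favors the biased coin about 40:1
print(posterior_biased)  # ~4e-8: the billion-to-one prior swamps it, so the Bayesian still says fair
```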
I’m saying that all inferences are still correct. So if your prior is correct/well calibrated, then your posterior is as well. If you end up with 100 studies that all found an effect for different things at a posterior of 95%, 5% of them should be wrong.
But that is based on the posterior.
When I ask for clarification, you seem to be doing two things:
1. changing the subject to posteriors
2. asserting that a perfect prior leads to a perfect posterior.
I think 2 is uncontroversial, other than if you have a perfect prior why do any experiment at all? But it is also not what is being discussed. The issue is that with optional stopping you bias the Bayes factor.
As another poster mentioned, expected evidence is conserved. So let’s think of this like a frequentist who has a laboratory full of Bayesians in cages. Each Bayesian gets one set of data collected via a standard protocol. Without optional stopping, most of the Bayesians get similar evidence, and they all do roughly the same updates.
With optional stopping, you’ll create either short sets of stopped data that support the favored hypothesis or very long sets of data that fail to support the favored hypothesis. So you might be able to create a rule that fools 99 out of the 100 Bayesians, but the remaining Bayesian is going to be very strongly convinced of the disfavored hypothesis.
Where the Bayesian wins over the frequentist is that if you let the Bayesians out of the cages to talk, and they share their likelihood ratios, they can coherently combine evidence and the 1 correct Bayesian will convince all the incorrect Bayesians of the proper update. With frequentists, fewer will be fooled, but there isn’t a coherent way to combine the confidence intervals.
So the issue for scientists writing papers is that if you are a Bayesian and adopt the second, optionally stopped experimental protocol (let’s say it really can fool 99 out of 100 Bayesians), then at least 99 out of 100 of the experiments you run will be a success (some of the effects really will be real). The 1/100 that fails miserably doesn’t have to be published.
Even if it is published, if two experimentalists both average to the truth, the one who paints most of his results as experimental successes probably goes further in his career.
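Here is a sketch of that likelihood-ratio pooling for the simplest case, where both hypotheses are fully specified (the 0.6-vs-0.5 coin and the group sizes are invented): each caged Bayesian reports one likelihood ratio, the ratios are simply multiplied, and the product equals the likelihood ratio of the pooled data.

```python
import random
from math import prod

random.seed(0)

# Two simple hypotheses about a coin: H1: P(heads) = 0.6, H0: P(heads) = 0.5.
def likelihood_ratio(flips):
    lr = 1.0
    for heads in flips:
        lr *= (0.6 if heads else 0.4) / 0.5
    return lr

# Ten caged Bayesians, each with their own 20-flip data set (generated under H0 here).
datasets = [[random.random() < 0.5 for _ in range(20)] for _ in range(10)]

combined = prod(likelihood_ratio(d) for d in datasets)              # multiply the shared ratios
pooled = likelihood_ratio([flip for d in datasets for flip in d])   # ratio of all the data pooled

print(combined, pooled)  # agree (up to floating-point rounding): the evidence combines coherently
```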
Can’t frequentists just pool their data and then generate a new confidence interval from the supersized sample?
I think 2 is uncontroversial, other than if you have a perfect prior why do any experiment at all?
By perfect I mean well calibrated. I don’t see why knowing that your priors in general are well calibrated implies that more information doesn’t have positive expected utility.
The issue is that with optional stopping you bias the Bayes factor.
Only in some cases, and only with regard to someone who knows more than the Bayesian. The Bayesian himself can’t predict that the factor will be biased; the expected factor should be 1. It’s only someone who knows better that can predict this.
So let’s think of this like a frequentist who has a laboratory full of Bayesians in cages.
Before I analyse this case, can you clarify whether the hypothesis happens to be true, false, or chosen at random? Also give these Bayesians’ priors, and perhaps an example of the rule you’d use.
Before I analyse this case, can you clarify whether the hypothesis happens to be true, false, or chosen at random? Also give these Bayesians’ priors, and perhaps an example of the rule you’d use.
Again, the prior doesn’t matter, they are computing Bayes factors. We are talking about Bayes factors. Bayes factors. Prior doesn’t matter. Bayes factors. Prior.Doesn’t.Matter. Bayes factors. Prior.Doesn’t.Matter. Bayes.factor.
Let’s say the null is true, but the frequentist mastermind has devised some data generating process (let’s say he has infinite data at his disposal) that can produce evidence in favor of the competing hypothesis at a Bayes factor of 3, 99% of the time.
Again, the prior doesn’t matter, they are computing Bayes factors.
It matters here, because you said “So you might be able to create a rule that fools 99 out of the 100 Bayesians”. The probability of getting data given a certain rule depends on which hypothesis is true, and if we’re assuming the hypothesis is drawn according to the prior, then we need to know the prior to calculate those numbers.
Let’s say the null is true, but the frequentist mastermind has devised some data generating process (let’s say he has infinite data at his disposal) that can produce evidence in favor of the competing hypothesis at a Bayes factor of 3, 99% of the time.
That’s impossible. http://doingbayesiandataanalysis.blogspot.com/2013/11/optional-stopping-in-data-collection-p.html goes through the math:
Using either Bayesian HDI with ROPE, or a Bayes factor, the false alarm rate asymptotes at a level far less than 100% (e.g., 20-25%). In other words, using Bayesian methods, the null hypothesis is accepted when it is true, even with sequential testing of every datum, perhaps 75-80% of the time.
In fact, you can show easily that this can succeed at most 33% of the time. By definition, the Bayes factor is how likely the data is given one hypothesis, divided by how likely the data is given the other. The data in the class “results in a Bayes factor of 3 against the null” has a certain chance of happening given that the null is true, say p. This class of course contains many individual mutually exclusive sets of data, each with a far lower probability, but they sum to p. Now, the chance of this class of possible data sets happening given that the null is not true has an upper bound of 1. Each individual probability (and these collectively sum to at most 1) must be at least 3 times as much as the corresponding probability in the group that sums to p. Ergo, p is upper bounded by 33%.
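A simulation sketch of that bound (the uniform-prior alternative, the 1,000-flip cap, and the trial count are my choices, not anything from the thread): even checking the Bayes factor after every flip of a fair coin and stopping the moment it reaches 3, the fraction of runs that ever get there stays well under 1/3.

```python
import random
from math import comb

random.seed(1)

# H0: fair coin. H1: unknown bias with a uniform prior, so the marginal probability of a
# particular sequence with h heads in n flips is 1 / ((n + 1) * C(n, h)).
def peeks_until_bf3(n_max=1000):
    heads = 0
    for n in range(1, n_max + 1):
        heads += random.random() < 0.5            # data generated under the null
        marg_h1 = 1 / ((n + 1) * comb(n, heads))  # P(sequence | H1)
        bf10 = marg_h1 / 0.5**n                   # P(sequence | H0) = 0.5**n
        if bf10 >= 3:
            return True                           # stopped early, having reached a factor of 3 against the null
    return False

trials = 1000
fooled = sum(peeks_until_bf3() for _ in range(trials)) / trials
print(fooled)  # well below 1/3 (around 0.2 in runs like this), as the bound predicts
```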
I think this is problem dependent.
In simulation, I start to asymptote to around 20% with a coin flip, but estimating a mean from a normal distribution (with the null being 0) with fixed variance I keep climbing indefinitely. If you are willing to sample literally forever it seems like you’d be able to convince the Bayesian that the mean is not 0 with arbitrary Bayes factor. So for large enough N in a sample, I expect you can get a factor of 3 for 99/100 of the Bayesians in cages (so long as that last Bayesian is really, really sure the value is 0).
But it doesn’t change the results if we switch and say we fool 33% of the Bayesians with a Bayes factor of 3. We are still fooling them.
If you are willing to sample literally forever it seems like you’d be able to convince the Bayesian that the mean is not 0 with arbitrary Bayes factor.
No, there’s a limit on that as well. See http://www.ejwagenmakers.com/2007/StoppingRuleAppendix.pdf
Instead, as pointed out by Edwards et al. (1963, p. 239):
“(...) if you set out to collect data until your posterior probability for a hypothesis which unknown to you is true has been reduced to .01, then 99 times out of 100 you will never make it, no matter how many data you, or your children after you, may collect (...)”.
If you can generate arbitrarily high Bayes factors, then you can reduce your posterior to .01, which means that it can only happen 1 in 100 times. You can never have a guarantee of always getting strong evidence for a false hypothesis. If you find a case that does, it will be new to me and probably change my mind.
But it doesn’t change the results if we switch and say we fool 33% of the Bayesians with a Bayes factor of 3. We are still fooling them.
That doesn’t concern me. I’m not going to argue for why; I’ll just point out that if it is a problem, it has absolutely nothing to do with optional stopping. The exact same behavior (probability 1/3 of generating a Bayes factor of 3 in favor of a false hypothesis) shows up in the following case: a coin either always lands on heads, or lands on heads 1/3 of the time and tails 2/3 of the time. I flip the coin a single time. Let’s say the coin is the second coin. There’s a 33% chance of getting heads, which would produce a Bayes factor of 3 in favor of the 100%H coin.
If there’s something wrong with that, it’s a problem with classic Bayes, not optional stopping.
It is my thesis that every optional stopping so-called paradox can be converted into a form without optional stopping, and those will be clearer as to whether the problem is real or not.
I can check my simulation for bugs. I don’t have the referenced textbook to check the result being suggested.
It is my thesis that every optional stopping so-called paradox can be converted into a form without optional stopping, and those will be clearer as to whether the problem is real or not.
The first part of this is trivially true. Replace the original distribution with the sampling distribution from the stopped problem, and it’s no longer a stopped problem; it’s normal pulls from that sampling distribution.
I’m not sure it’s more clear; I think it is not. Your “remapped” problem makes it look like it’s a result of low data volume and not a problem of how the sampling distribution was actually constructed.
You can see http://projecteuclid.org/euclid.aoms/1177704038, which proves the result.
Replace the original distribution with the sampling distribution from the stopped problem, and it’s no longer a stopped problem; it’s normal pulls from that sampling distribution.
How would this affect a frequentist?
I’m not sure it’s more clear; I think it is not. Your “remapped” problem makes it look like it’s a result of low data volume and not a problem of how the sampling distribution was actually constructed.
I’m giving low-data examples because those are the simplest kinds of cases to think of. If you had lots of data with the same distribution/likelihood, it would be the same. I leave it as an exercise to find a case with lots of data and the same underlying distribution …
I was mainly trying to convince you that nothing’s actually wrong with having a 33% false positive rate in contrived cases.
It doesn’t; the frequentist is already measuring with the sampling distribution. That is how frequentism works.
I was mainly trying to convince you that nothing’s actually wrong with having a 33% false positive rate in contrived cases.
I mean it’s not “wrong”, but if you care about false positive rates and there is a method that has a 5% false positive rate, wouldn’t you want to use that instead?
If for some reason low false positive rates were important, sure. If it’s important enough to give up consistency.