You can generate an experiment that has a high chance (let’s say 99%) of making a Bayesian have a 20:1 likelihood ratio in favor of some hypothesis.
This is wrong, unless I’ve misunderstood you. Imagine the prior for hypothesis H is p, hence the prior for ~H is 1-p. Since your prior must equal your expected posterior (conservation of expected evidence), a 99% chance of generating a 20:1 likelihood ratio for H means your prior is bounded below by .99*(20p/(19p+1)). (The second factor is the posterior for H after a 20:1 likelihood ratio.) So we have the inequality p > .99*(20p/(19p+1)). I was too lazy to solve that by hand, so I plugged it into http://www.wolframalpha.com/input/?i=p%3E+.99*%2820p%29%2F%2819p%2B1%29%2C+0%3Cp%3C1 , which tells me that p must be at least 0.989474.
So you can only expect to generate strong evidence for a hypothesis if you’re already pretty sure of it, which is just as it should be.
I may have bungled these calculations, though, since I did them quickly.
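A minimal sketch of that check (Python, assuming only the 20p/(19p+1) posterior above):

```python
# Sketch: by conservation of expected evidence, the prior p equals the expected
# posterior, so p >= 0.99 * (posterior after the 20:1 outcome).

def posterior_after_20_to_1(p):
    """Posterior for H after evidence with a 20:1 likelihood ratio in favor of H."""
    return 20 * p / (19 * p + 1)

# Rearranging p >= 0.99 * 20p/(19p+1) gives 19p + 1 >= 19.8, i.e. p >= 18.8/19.
print(18.8 / 19)  # 0.98947..., matching the Wolfram Alpha answer

# The constraint fails below that threshold and holds above it:
for p in (0.98, 0.99, 0.999):
    print(p, 0.99 * posterior_after_20_to_1(p) <= p)
```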
Edit: removed for misunderstanding ike’s question and giving an irrelevant answer. Huge thanks to ike for teaching me math.
That’s exactly what I used it for in my calculation; I didn’t misunderstand that. Your computation of “conservation of expected evidence” simply does not work unless your prior is extremely high to begin with. Put simply, you cannot be 99% sure that you’ll later update your belief in H from p to anything greater than 100*p/99, which places a severe lower bound on p for a likelihood ratio of 20:1.
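(To spell that bound out: by Bayes’ theorem, P(H|E) = P(H and E)/P(E) <= P(H)/P(E), so if you assign probability .99 to the confirming outcome E, observing E can take you at most to p/.99 = 100*p/99. Requiring the 20:1 posterior to fit under that ceiling, 20p/(19p+1) <= 100*p/99, rearranges to 19p+1 >= 19.8, the same p >= 0.989474 bound as above.)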
Yes! It worked! I learned something by getting embarrassed online!!!
ike, you’re absolutely correct. I applied conservation of expected evidence to likelihood ratios instead of to posterior probabilities, and thus didn’t realize that the prior puts bounds on expected likelihood ratios. This also means that the numbers I suggested (1% of 1:2000, 99% of 20:1) define the prior precisely at 98.997%.
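A minimal numerical sketch of that 98.997% figure (Python; assumes the two posteriors are 20p/(19p+1) and p/(2000 - 1999p), and uses simple bisection):

```python
# Sketch: find the prior p implied by "99% chance of a 20:1 likelihood ratio,
# 1% chance of a 1:2000 likelihood ratio" via conservation of expected evidence:
#   p = 0.99 * 20p/(19p+1) + 0.01 * p/(2000 - 1999p)

def expected_posterior(p):
    post_for = 20 * p / (19 * p + 1)        # posterior after the 20:1 outcome
    post_against = p / (2000 - 1999 * p)    # posterior after the 1:2000 outcome
    return 0.99 * post_for + 0.01 * post_against

# Bisect on expected_posterior(p) - p, which changes sign once in (0.9, 0.999).
lo, hi = 0.9, 0.999
for _ in range(60):
    mid = (lo + hi) / 2
    if expected_posterior(mid) > mid:
        lo = mid
    else:
        hi = mid

print((lo + hi) / 2)  # ~0.98997, i.e. the prior is pinned at about 98.997%
```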
I’m going to leave the fight to defend the reputation of Bayesian inference to you and go do some math exercises.