Bayes’ theorem is just one of many mathematical equations, like the Pythagorean theorem, for example. There is inherently nothing magical about it.
It just happens to explain one problem with the current scientific publishing process: neglecting base rates. Which sometimes seems like this: “I designed an experiment that would prove a false hypothesis only with probability p = 0.05. My experiment has succeeded. Please publish my paper in your journal!”
(I guess I am exaggerating a bit here, but many people ‘doing science’ would not understand immediately what is wrong with this. And that would be those who even bother to calculate the p-value. Not everyone who is employed as a scientist is necessarily good at math. Many people get paid for doing bad science.)
This kind of thinking has the following problem: even if you invent a hundred completely stupid hypotheses, and design experiments that would each prove a false hypothesis only with p = 0.05, then on average five of them would be proved by the experiment. If you show someone else all hundred experiments together, they may understand what is wrong. But you are more likely to send only the five successful ones to the journal, aren’t you? -- But how exactly is the journal supposed to react to this? Should they ask: “Did you do many other experiments, even ones completely irrelevant to this specific hypothesis? Because, you know, that somehow undermines the credibility of this one.”
The current scientific publishing process has a bias. Bayes’ theorem explains it. We care about science, and we care about science being done correctly.
It just happens to explain one problem with the current scientific publishing process: neglecting base rates. Which sometimes seems like this: “I designed an experiment that would prove a false hypothesis only with probability p = 0.05. My experiment has succeeded. Please publish my paper in your journal!”
That’s not neglecting base rates, that’s called selection bias combined with incentives to publish. Bayes’ theorem isn’t going to help you with this.
http://xkcd.com/882/
Uhm, it’s similar, but not the same.
If I understand it correctly, selection bias is when 20 researchers run an experiment with green jelly beans, 19 of them don’t find a significant correlation, 1 of them finds it… and only that 1 publishes, and the 19 don’t. The essence is that we had 19 pieces of evidence against the green jelly beans and only 1 piece of evidence for them, but we don’t see those 19 pieces, because they were not published. Selection = “there is X and Y, but we don’t see Y, because it was filtered out by the process that gives us information”.
But imagine that you are the first researcher ever who has researched the jelly beans. And you only did one experiment. And it happened to succeed. Where is the selection here? (Perhaps selection across Everett branches or Tegmark universes. But we can’t blame the scientific publishing process for not giving us information from the parallel universes, can we?)
In this case, base rate neglect means ignoring the fact that “if you take a random thing, the probability that this specific thing causes acne is very low”. Therefore, even if the experiment shows a connection with p = 0.05, it’s still more likely that the result just happened randomly.
The proper reasoning could be something like this (all numbers pulled out of a hat) -- we already have pretty strong evidence that acne is caused by food; let’s say there is a 50% probability for this. With enough specificity (giving each fruit a different category, etc.), there are maybe 2000 categories of food. It is possible that more than one of them causes acne, and our probability distribution for that is… something. Considering all this information, we estimate a prior probability of, let’s say, 0.0004 that a random food causes acne. -- Which means that if the correlation is significant at the level p = 0.05, that per se means almost nothing. (Here one could use Bayes’ theorem to calculate that the p = 0.05 successful experiment shows the true cause of acne with a probability of roughly 1%.) We would need to tighten the threshold to roughly p = 0.0004 just to get a 50% chance of being right. How can we do that? We should use a much larger sample, or we should repeat the experiment many times, record all the successes and failures, and do a meta-analysis.
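To make that arithmetic concrete, here is a minimal sketch of the Bayes calculation, with the extra simplifying assumption (not stated above) that the experiment has perfect power, i.e. it never misses a food that really causes acne:

```python
# Minimal sketch of the Bayes calculation above.
# Added assumption: the test has perfect power, i.e. it always
# flags a food that really causes acne.

def posterior(prior, alpha, power=1.0):
    """P(food causes acne | significant result) via Bayes' theorem."""
    true_positive = prior * power          # really causes acne AND test says yes
    false_positive = (1 - prior) * alpha   # harmless AND test says yes anyway
    return true_positive / (true_positive + false_positive)

prior = 0.0004   # estimated chance that a random food causes acne
alpha = 0.05     # significance threshold of the experiment

print(posterior(prior, alpha))      # ~0.008, i.e. roughly 1%

# How strict must the threshold be before a significant result means
# a 50% chance of a real effect?  Solve the same equation for alpha:
print(prior * 1.0 / (1 - prior))    # ~0.0004
```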
But imagine that you are the first researcher ever who has researched the jelly beans. And you only did one experiment. And it happened to succeed. Where is the selection here?
That’s a different case—you have no selection bias here, but your conclusions are still uncertain—if you pick p=0.05 as your threshold, you’re clearly accepting that there is a 5% chance of a Type I error: the green jelly beans did nothing, but the noise happened to be such that you interpreted it as conclusive evidence in favor of your hypothesis.
But that all is fine—the readers of scientific papers are expected to understand that results significant to p=0.05 will be wrong around 5% of the times, more or less (not exactly because the usual test measures P(D|H), the probability of the observed data given the (null) hypothesis while you really want P(H|D), the probability of the hypothesis given the data).
base rate neglect means ignoring the fact that “if you take a random thing, the probability that this specific thing causes acne is very low”
People rarely take entirely random things and test them for causal connection to acne. Notice how you had to do a great deal of handwaving in establishing your prior (aka the base rate).
As an exercise, try to be specific. For example, let’s say I want to check if the tincture made from the bark of a certain tree helps with acne. How would I go about calculating my base rate / prior? Can you walk me through an estimation which will end with a specific number?
the readers of scientific papers are expected to understand that results significant to p=0.05 will be wrong around 5% of the times, more or less
And this is the base rate neglect. It’s not “results significant to p=0.05 will be wrong about 5% of the time”. It’s “wrong results will be significant to p=0.05 about 5% of the time”. And most people will confuse these two statements.
It’s like when people confuse “A ⇒ B” with “B ⇒ A”, only this time it is “A ⇒ B (p=0.05)” versus “B ⇒ A (p=0.05)”. It is “if wrong, then significant in 5% of cases”. It is not “if significant, then wrong in 5% of cases”.
Notice how you had to do a great deal of handwaving in establishing your prior (aka the base rate).
Yes, you are right. Establishing the prior is pretty difficult, perhaps impossible. (But that does not make “A ⇒ B” equal to “B ⇒ A”.) Probably the reasonable thing to do would be simply to impose strict limits in areas where many results were proved wrong.
Probably the reasonable thing to do would be simply to impose strict limits in areas where many results were proved wrong.
Um, what “strict limits” are you talking about, what will they look like, and who will be doing the imposing?
To get back to my example, let’s say I’m running experiments to check if the tincture made from the bark of a certain tree helps with acne—what strict limits would you like?
p = 0.001, and if at the end of the year too many results fail to replicate, keep decreasing the threshold. (Let’s say that “fail to replicate” in this context means that the replication attempt cannot confirm the result even with p = 0.05 -- we don’t want to make replications too expensive, just a simple sanity check.)
let’s say I’m running experiments to check if the tincture made from the bark of a certain tree helps with acne—what strict limits would you like?
a long answer would involve a lot of handwaving again (it depends on why you believe the bark is helpful; in other words, what other evidence you already have)
a short answer: for example, p = 0.001
Well, and what’s magical about this particular number? Why not p=0.01? Why not p=0.0001? Confidence thresholds are arbitrary; do you have a compelling argument why any particular one is better than the rest?
Besides, you’re forgetting the costs. Assume that the reported p-values are true (and not the result of selection bias, etc.). Take a hundred papers which claim results at p=0.05. At the asymptote about 95 of them will turn out to be correct and about 5 will turn out to be false. By your strict criteria you’re rejecting all of them—you’re rejecting 95 correct papers. There is a cost to that, is there not?
Lumifer, please update that at this moment you don’t grok the difference between “A ⇒ B (p=0.05)” and “B ⇒ A (p = 0.05)”, which is why you don’t understand what p-value really means, which is why you don’t understand the difference between selection bias and base rate neglect, which is probably why the emphasis on using Bayes theorem in scientific process does not make sense to you. You made a mistake, that happens to all of us. Just stop it already, please.
And don’t feel bad about it. Until recently I didn’t understand it either, and I had a gold medal from the International Mathematical Olympiad. Somehow it is not explained correctly at most schools, perhaps because the teachers don’t get it themselves, or maybe they just underestimate the difficulty of proper understanding and the high chance of getting it wrong. So please don’t contribute to the confusion.
Imagine that there are 1000 possible hypotheses, among which 999 are wrong, and 1 is correct. (That’s just a random example to illustrate the concept. The numbers in real life can be different.) You have an experiment that says “yes” to 5% of the wrong hypotheses (this is what p=0.05 means), and also to the correct hypothesis. So at the end, you have 50 wrong hypotheses and 1 correct hypothesis confirmed by the experiment. So in the journal, 98% of the published articles would be wrong, not 5%. It is “wrong ⇒ confirmed (p=0.05)”, not “confirmed ⇒ wrong (p=0.05)”.
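For anyone who prefers to see it run, here is a quick simulation of that 1000-hypotheses example; the only added assumption is perfect power on the single correct hypothesis:

```python
# Quick simulation of the 1000-hypotheses example: 999 wrong hypotheses,
# 1 correct one, a test with a 5% false-positive rate, and (added
# assumption) perfect power on the correct hypothesis.
import random

random.seed(0)
TRIALS = 1000
wrong_confirmed = 0
correct_confirmed = 0

for _ in range(TRIALS):
    # each wrong hypothesis gets "confirmed" with probability 0.05
    wrong_confirmed += sum(random.random() < 0.05 for _ in range(999))
    # the single correct hypothesis is always confirmed (perfect power)
    correct_confirmed += 1

total = wrong_confirmed + correct_confirmed
print(wrong_confirmed / total)   # ~0.98: about 98% of confirmed hypotheses are wrong
```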
LOL. Yeah, yeah, mea culpa, I had a brain fart and expressed myself very poorly.
I do understand what p-value really means. The issue was that I had in mind a specific scenario (where in effect you’re trying to see if the difference in means between two groups is significant) but neglected to mention it in the post :-)
Lumifer, please update that at this moment you don’t grok the difference between “A ⇒ B (p=0.05)” and “B ⇒ A (p = 0.05)”, which is why you don’t understand what p-value really means, which is why you don’t understand the difference between selection bias and base rate neglect, which is probably why the emphasis on using Bayes theorem in scientific process does not make sense to you. You made a mistake, that happens to all of us. Just stop it already, please.
I feel like this could use a bit longer explanation, especially since I think you’re not hearing Lumifer’s point, so let me give it a shot. (I’m not sure I see a meaningful difference between base rate neglect and selection bias in this circumstance.)
The word “grok” in Viliam_Bur’s comment is really important. This part of the grandparent is true:
Assume that the reported p-values are true (and not the result of selection bias, etc.). Take a hundred papers which claim results at p=0.05. At the asymptote about 95 of them will turn out to be correct and about 5 will turn out to be false.
But it’s like saying “well, assume the diagnosis is correct. Then the treatment will make the patient better with high probability.” While true, it’s totally out of touch with reality -- we can’t assume the diagnosis is correct, and a huge part of being a doctor is responding correctly to that uncertainty.
Earlier, Lumifer said this, which is an almost correct explanation of using Bayes in this situation:
But that all is fine—the readers of scientific papers are expected to understand that results significant to p=0.05 will be wrong around 5% of the times, more or less (not exactly because the usual test measures P(D|H), the probability of the observed data given the (null) hypothesis while you really want P(H|D), the probability of the hypothesis given the data).
The part that makes it the “almost” is the “5% of the times, more or less.” This implies that it’s centered around 5%, with random chance determining what this instance is. But selection bias means it will almost certainly be more, and generally much more. In fields that study phenomena that don’t exist, 100% of the papers published will be of false results that were significant by chance. In many real fields, rates of failure to replicate are around 30%. Describing 30% as “5%, more or less” seems odd, to say the least.
But the proposal to reduce the p value doesn’t solve the underlying problem (which was Lumifer’s response). If we set the p value threshold lower, at .01 or .001 or wherever, we reduce the risk of false positives at the cost of increasing the risk of false negatives. A study design which needs to detect an effect at the .001 level is much more expensive than a study design which needs to detect an effect at the .05 level, and so we will have many fewer studies attempted, and far fewer published studies.
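To put a rough number on that cost, here is a sketch using the textbook normal-approximation sample-size formula for a two-sample test; the 80% power target and the standardized effect size d = 0.5 are assumptions of mine, not numbers from the thread:

```python
# Rough cost comparison: approximate sample size per group needed by a
# two-sided two-sample test, using the usual normal-approximation formula.
# The 80% power target and effect size d = 0.5 are assumed, not from the thread.
from scipy.stats import norm

def n_per_group(alpha, power=0.8, d=0.5):
    """Approximate subjects per group to detect effect size d."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * (z_alpha + z_beta) ** 2 / d ** 2

print(round(n_per_group(0.05)))    # ~63 subjects per group
print(round(n_per_group(0.001)))   # ~137 subjects per group, more than twice as many
```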
Better to drop p entirely. Notice that stricter p thresholds go in the opposite direction from the publication of negative results, which is the real solution to the problem of selection bias. By calling for stricter p thresholds, you implicitly assume that p is a worthwhile metric, when what we really want is publication of negative results and more replications.
But it’s like saying “well, assume the diagnosis is correct. Then the treatment will make the patient better with high probability.” While true, it’s totally out of touch with reality
My grandparent post was stupid, but what I had in mind was basically a phase 2 (or 3) drug trial situation. You have declared (at least to the FDA) that you’re running a trial, so selection bias does not apply at this stage. You have two groups, one receives the experimental drug, one receives a placebo. Assume a double-blind randomized scenario and assume there is a measurable metric of improvement at the end of the trial.
After the trial you have two groups with two empirical distributions of the metric of choice. The question is how confident you are that these two distributions are different.
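For concreteness, a minimal sketch of that comparison with made-up improvement scores; a two-sample t-test is one conventional way to ask how surprising the observed difference would be if the drug did nothing:

```python
# Sketch of the two-group comparison with made-up improvement scores.
# ttest_ind reports how surprising a difference in means this large would
# be if the drug and the placebo really had the same effect.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
drug = rng.normal(loc=5.0, scale=2.0, size=100)      # hypothetical improvement, drug group
placebo = rng.normal(loc=4.5, scale=2.0, size=100)   # hypothetical improvement, placebo group

t_stat, p_value = ttest_ind(drug, placebo)
print(t_stat, p_value)   # p is P(data this extreme | no true difference),
                         # not P(no true difference | data)
```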
Better to drop p entirely.
Well, as usual it’s complicated. Yes, the p-test is suboptimal in most situations where it’s used in reality. However it fulfils a need, and if you drop the test entirely you need a replacement, because the need won’t go away.
Assume that the reported p-values are true (and not the result of selection bias, etc.). Take a hundred papers which claim results at p=0.05. At the asymptote about 95 of them will turn out to be correct...
That’s not how p-values work. p=0.05 doesn’t mean that the hypothesis is 95% likely to be correct, even in principle; it means that there’s a 5% chance of seeing a correlation at least that strong if the null hypothesis is true. Pull a hundred independent data sets and we’d normally expect to find a p=0.05 correlation or better in at least five or so of them, no matter whether we’re testing, say, an association of cancer risk with smoking or with overuse of the word “muskellunge”.
This distinction’s especially important to keep in mind in an environment where running replications is relatively low-status or where negative results tend to be quietly shelved—both of which, as it happens, hold true in large chunks of academia. But even if this weren’t the case, we’d normally expect replication rates to be less than one minus the claimed p-value, simply because there are many more promising ideas than true ones and some of those will turn up false positives.
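A small simulation of the first point, with a hundred data sets in which the null hypothesis is true by construction (the variable names are just placeholders):

```python
# One hundred data sets where the null hypothesis is true by construction;
# roughly five of them still show a "significant" correlation at p = 0.05.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
false_positives = 0
for _ in range(100):
    x = rng.normal(size=50)   # e.g. use of the word "muskellunge" -- pure noise
    y = rng.normal(size=50)   # cancer risk, unrelated to x by construction
    _, p = pearsonr(x, y)
    false_positives += p < 0.05

print(false_positives)   # around 5 of the 100 null data sets
```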
Take a hundred papers which claim results at p=0.05. At the asymptote about 95 of them will turn out to be correct and about 5 will turn out to be false.
No, they won’t. You’re committing base rate neglect. It’s entirely possible for people to publish 2000 papers in a field where there’s no hope of finding a true result, and get 100 false results with p < 0.05.