Possible idea for a post:
There isn’t much material here on the problem of multiple comparisons. This is something that humans routinely stumble over, while for an ideal Bayesian it wouldn’t even be a problem requiring a solution (much like e.g. confirmation bias). The post would describe the multiple comparisons problem, explain why it’s a non-issue for Bayesians, and look into plausible candidates for the psychological mechanisms that give rise to it (hindsight bias, privileging the hypothesis, base-rate neglect; any others?).
Reply here if you are (actually) starting to work on this.
I’d love to see a post on this, ideally with R code. In particular, I need to know about this because I’m running a big sleep experiment with 5 separate interventions, each with multiple endpoints. You can see the problem.
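To make the problem concrete, here's a quick all-null simulation in base R (the figure of 4 endpoints per intervention is made up for illustration; the real experiment may differ):

```r
set.seed(1)
n.interventions <- 5
n.endpoints     <- 4   # hypothetical count; I don't know the real number per intervention
n.tests <- n.interventions * n.endpoints
# 20 two-sample t-tests on pure noise: none of the interventions does anything
p <- replicate(n.tests, t.test(rnorm(30), rnorm(30))$p.value)
sum(p < 0.05)      # typically 1-2 'significant' results from luck alone
1 - 0.95^n.tests   # ~64% chance of at least one false positive
```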
In my previous frequentist analyses, which had the same problem of multiple endpoints, I applied multiple-comparison correction to the p-values, but I'd rather do a Bayesian analysis, and I don't know how to do the equivalent correction with Bayesian results. Reading around, a Gelman paper tells me that I don't need to: if I fit hierarchical models, probability mass gets automatically reallocated across models, which obviates the need for correction. Whatever that means; it's not as if I know hierarchical models either!
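For comparison, here's a sketch of both routes applied to the simulated p-values above: the frequentist corrections via base R's p.adjust, and a crude normal-normal shrinkage estimator as a stand-in for the partial pooling Gelman describes (the shrinkage formula is my toy illustration, not his actual model):

```r
# frequentist route: adjust the whole vector of p-values from the sketch above
p.bonf <- p.adjust(p, method = "bonferroni")  # controls familywise error rate
p.bh   <- p.adjust(p, method = "BH")          # controls false discovery rate
sum(p.bh < 0.05)   # the spurious 'findings' mostly disappear

# toy stand-in for partial pooling: shrink each endpoint's effect estimate
# toward the grand mean, with a weight estimated from the data themselves
y    <- rnorm(n.tests, 0, 0.5)          # observed effect estimates (all truly null)
se   <- rep(0.5, n.tests)               # their standard errors, assumed known
tau2 <- max(0, var(y) - mean(se^2))     # crude between-endpoint variance estimate
shrunk <- mean(y) + tau2 / (tau2 + se^2) * (y - mean(y))
cbind(raw = range(y), pooled = range(shrunk))  # the extremes get pulled way in
```

The shrinkage line is the whole trick: when the endpoints all look alike, the estimates pool almost completely, so no single lucky endpoint can stand out, which is the sense in which the hierarchical model "automatically" handles multiplicity.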
My idea was less about statistical practice than about very simple toy models illustrating some general points: in particular, that if you write down your priors beforehand and use likelihood ratios, you can run as many comparisons as you like without any ‘adjustments’. The reason multiple comparisons are suspect in practice has to do with human biases and the circumstances under which scientists engage in this sort of data mining. I’ve since read a paper that makes much the same theoretical points, although it overstates their practical significance.
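Something like the following R sketch is what I have in mind (the 10% base rate and the effect size of 2 are arbitrary choices): fix the prior in advance, compute a likelihood ratio for every comparison, and check that the posteriors stay calibrated however many comparisons there are:

```r
set.seed(2)
n    <- 1000                       # number of comparisons; make it as large as you like
real <- rbinom(n, 1, 0.10)         # 10% of candidate effects are actually real
x    <- rnorm(n, mean = 2 * real)  # one noisy observation per candidate
# likelihood ratio for H1 (mu = 2) against H0 (mu = 0), then Bayes' rule
lr         <- dnorm(x, 2, 1) / dnorm(x, 0, 1)
prior.odds <- 0.10 / 0.90          # the prior, written down before looking
post       <- prior.odds * lr / (1 + prior.odds * lr)
# calibration check: the stated posterior matches the realized truth rate
c(claimed = mean(post[post > 0.8]), actual = mean(real[post > 0.8]))
```

Whether n is 20 or 20,000, the calibration holds, because the prior and the hypothesis space were fixed before looking at the data; the practical trouble comes from humans choosing what to test and report afterwards.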