what we do is simply calculate P(E|~H) (techniques for doing this being of course the principal concern of statistics texts),
No no no. That would be a hundred times saner than frequentism. What you actually do is take the real data e-12 and put it into a giant bin E that also contains e-1, e-3, and whatever else you can make up a plausible excuse to include or exclude, and then you calculate P(E|~H). This is one of the key points of flexibility that enables frequentists to get whatever answer they like, the other being the choice of control variables in multivariate analyses.
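To make the binning point concrete, here is a toy sketch of my own (not from the post), assuming a simple binomial null hypothesis and that scipy is available: the same observed outcome gets a different P(E|~H) depending on which other outcomes you decide to sweep into the bin E.

```python
# Toy illustration: under the null ~H of a fair coin and n = 12 flips, we observe
# 10 heads. "P(E|~H)" changes depending on which outcomes we lump into the bin E.
# All numbers here are made up for illustration.
from scipy.stats import binom

n, p_null, observed = 12, 0.5, 10

# E = {at least as many heads as observed}: k >= 10
p_one_sided = binom.sf(observed - 1, n, p_null)

# E = {at least as extreme in either direction}: k >= 10 or k <= 2
p_two_sided = binom.sf(observed - 1, n, p_null) + binom.cdf(n - observed, n, p_null)

# E = {exactly the observed outcome}: k == 10
p_exact = binom.pmf(observed, n, p_null)

print(f"E = one tail          P(E|~H) = {p_one_sided:.4f}")
print(f"E = both tails        P(E|~H) = {p_two_sided:.4f}")
print(f"E = observed outcome  P(E|~H) = {p_exact:.4f}")
```

Same data, three defensible-sounding bins, three different answers.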
See e.g. this part of the article:
The authors used what’s called a Mann-Whitney U test, which, in simplified terms, aims to determine if two sets of data come from different distributions. The essential thing to know about this test is that it doesn’t depend on the actual data except insofar as those data determine the ranks of the data points when the two data sets are combined. That is, it throws away most of the data, in the sense that data sets that generate the same ranking are equivalent under the test.
This seems to use “frequentist” to mean “as statistics are actually practiced.” It is unreasonable to compare the implementation of A to the ideal form of B. In particular, the problem with the Mann-Whitney test seems to me to be that the authors looked up a recipe in a cookbook without understanding it, which they could have done just as easily in a Bayesian cookbook.
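To see concretely how little the test uses beyond ranks (the point the quoted passage is making), here is a quick check of my own, not from the article, assuming scipy's mannwhitneyu is available: two pairs of samples with very different values but identical rankings give identical results.

```python
# Quick check: the Mann-Whitney U test depends on the data only through ranks,
# so any strictly increasing transform of the values leaves the result unchanged.
# The data below are made up.
from scipy.stats import mannwhitneyu

x1, y1 = [1.0, 2.0, 4.0, 7.0], [3.0, 5.0, 6.0, 8.0]
# Cube and shift: very different numbers, same ranking when the sets are combined.
x2, y2 = [v ** 3 + 100 for v in x1], [v ** 3 + 100 for v in y1]

print(mannwhitneyu(x1, y1, alternative="two-sided"))
print(mannwhitneyu(x2, y2, alternative="two-sided"))  # identical U and p-value
```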
Can you elaborate on that?
Well, the blatant version would be to take 5 possible control variables and try all 32 possible omissions and inclusions to see if any of the combinations turns up “statistically significant”. This might look a little suspicious if you collected the data and then threw some of it away. If you were running regressions on an existing database with lots of potential control variables, why, they’ll just have to trust that you never secretly picked and chose.
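A sketch of that blatant version on made-up data, assuming statsmodels is available: the outcome is pure noise, the five candidate controls are correlated with the treatment so that including or omitting them actually moves the estimate, and we keep whichever of the 32 specifications gives the smallest p-value for the treatment.

```python
# Sketch of the blatant version: five candidate controls, 2**5 = 32 possible
# inclusion/exclusion patterns, and we report whichever one is most "convenient".
# Data are synthetic; the treatment has no real effect on the outcome.
from itertools import combinations

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
treatment = rng.normal(size=n)
controls = 0.6 * treatment[:, None] + rng.normal(size=(n, 5))  # correlated with treatment
outcome = rng.normal(size=n)                                   # pure noise: no real effect

best_p, best_subset = 1.0, None
for k in range(6):
    for subset in combinations(range(5), k):  # all 32 ways to include/omit controls
        X = np.column_stack([np.ones(n), treatment] + [controls[:, j] for j in subset])
        p = sm.OLS(outcome, X).fit().pvalues[1]  # p-value on the treatment coefficient
        if p < best_p:
            best_p, best_subset = p, subset

print(f"most convenient controls: {best_subset}, treatment p-value: {best_p:.3f}")
```

Whether any particular run crosses 0.05 depends on the noise, but the procedure is the point: once you have quietly optimized over 32 specifications, the reported p-value no longer means what it claims to mean.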
Someone who did that might not be able to convince themselves they weren’t cheating… but someone who, somehow or other, got an idea of which variables would be most convenient to control for, might well find themselves influenced just a bit in that direction.
I don’t see how being a Bayesian gets you out of cherry-picking your causal structure from a large set. You still have to decide which variables are conditional on which other variables.
You put in all the variables, use a hierarchical structure for the prior, use a weakly informative hyperprior, and let the data sort itself out if it can. Key phrase: automatic relevance determination; David MacKay originated the term while doing Bayesian inference for neural nets.
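A minimal sketch of that idea on made-up data with mostly irrelevant predictors, using scikit-learn's ARDRegression rather than MacKay's original neural-net setup: each coefficient gets its own prior precision (the hierarchical part), and the hyperparameters fitted from the data shrink the irrelevant coefficients toward zero.

```python
# Minimal sketch of automatic relevance determination: put all the candidate
# predictors in, give each coefficient its own prior precision, and let the
# fitted hyperparameters drive the irrelevant ones toward zero.
# Illustrative only; data and dimensions are made up.
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
n, d = 300, 10
X = rng.normal(size=(n, d))
true_coefs = np.zeros(d)
true_coefs[:2] = [3.0, -2.0]          # only the first two predictors matter
y = X @ true_coefs + rng.normal(scale=0.5, size=n)

model = ARDRegression().fit(X, y)
print(np.round(model.coef_, 2))       # near zero on the eight irrelevant columns
```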
Is that a ‘were not’?