Can you elaborate on that?
Well, the blatant version would be to take 5 possible control variables and try all 2^5 = 32 possible combinations of including and omitting them, to see whether any combination turns up “statistically significant”. This might look a little suspicious if you collected the data and then threw some of it away. If you were running regressions on an existing database with lots of potential control variables, why, they’ll just have to trust that you never secretly picked and chose.
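To make that concrete, here is a minimal sketch of the fishing expedition on simulated data; the dataset, the variable names, and the choice of statsmodels are all illustrative assumptions rather than anything specified above:

```python
# A sketch of the "blatant version": regress an outcome on a treatment
# plus each of the 2^5 = 32 subsets of five candidate controls, and
# note which specifications make the treatment look "significant".
# The data is simulated and the treatment has no true effect, so any
# hits are pure selection.
from itertools import combinations

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
controls = rng.normal(size=(n, 5))     # five candidate control variables
treatment = rng.normal(size=n)         # a "treatment" with zero true effect
outcome = controls @ rng.normal(size=5) + rng.normal(size=n)

hits = []
for k in range(6):                     # subset sizes 0 through 5
    for subset in combinations(range(5), k):
        X = sm.add_constant(
            np.column_stack([treatment, controls[:, list(subset)]])
        )
        p = sm.OLS(outcome, X).fit().pvalues[1]  # p-value on the treatment
        if p < 0.05:
            hits.append((subset, p))

# With 32 (correlated) shots at alpha = 0.05, a "significant"
# specification can turn up even though the treatment does nothing.
print(f"{len(hits)} of 32 specifications came out 'significant'")
```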
Someone who did that might not be able to convince themselves they weren’t cheating… but someone who, somehow or other, got an idea of which variables would be most convenient to control for, might well find themselves influenced just a bit in that direction.
I don’t see how being a Bayesian gets you out of cherry-picking your causal structure from a large set. You still have to decide which variables are conditioned on which other variables.
You put in all the variables, use a hierarchical structure for the prior, use a weakly informative hyperprior, and let the data sort itself out if it can. Key phrase: automatic relevance determination; David MacKay originated the term while doing Bayesian inference for neural nets.
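For instance, here is a minimal sketch of that idea using scikit-learn’s ARDRegression, which gives each coefficient its own precision hyperparameter under weak Gamma hyperpriors; the simulated data, and which variables count as “relevant”, are assumptions made up for the illustration:

```python
# Automatic relevance determination in miniature: include every
# candidate variable, give each coefficient its own precision with a
# weak hyperprior, and let the data decide which coefficients it can
# pull away from zero. The data is simulated; only two of the ten
# variables actually matter.
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
true_coef = np.zeros(p)
true_coef[[0, 3]] = [2.0, -1.5]        # the two relevant variables
y = X @ true_coef + rng.normal(scale=0.5, size=n)

model = ARDRegression()                # weak Gamma hyperpriors by default
model.fit(X, y)

# Irrelevant coefficients get driven toward zero (their inferred
# precision lambda_ becomes large); the relevant ones survive.
for j, (w, lam) in enumerate(zip(model.coef_, model.lambda_)):
    print(f"x{j}: coef={w:+.3f}  precision={lam:.2g}")
```

MacKay’s original ARD was for the input weights of neural networks; the linear-regression version above is just the simplest instance of the same trick.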
Is that a ‘were not’?