Ok, after reading these, it’s sounding a lot more like the main problem is causal inference people not being very steeped in Bayesian inference. Robins, Hernan and Wasserman’s argument is based on a mistake that took all of ten minutes to spot: they show that a particular quantity is independent of the propensity score function if the true parameters of the model are known, then jump to the conclusion that the estimate of that quantity is independent of the propensity. In fact the estimate does depend on the propensity, because the estimates of the model parameters themselves depend on it. Pearl’s argument is more abstract and IMO stronger, but it rests on the idea that causal relationships are not statistically testable… when in fact that’s basically the bread-and-butter use case for Bayesian model comparison.
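To give a flavor of what I mean by “testable via Bayesian model comparison,” here is a minimal toy sketch (entirely hypothetical numbers and setup, not the example from the Robins, Hernan and Wasserman paper). Two causal hypotheses, X→Y and Y→X, are tuned to imply the same observational joint distribution, yet they assign very different likelihoods to data gathered under an intervention that sets X to a fixed value, so the posterior odds between them can be driven arbitrarily far apart:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground truth: X ~ N(0,1), Y = a*X + N(0,sigma), i.e. X causes Y.
a, sigma = 2.0, 1.0

# Interventional data: we *set* X = x0 (do(X=x0)) and observe Y.
x0, n = 3.0, 200
y_int = a * x0 + rng.normal(0.0, sigma, n)

def log_norm(y, mu, sd):
    """Log density of N(mu, sd^2) at y."""
    return -0.5 * np.log(2 * np.pi * sd**2) - (y - mu) ** 2 / (2 * sd**2)

# H1 (X -> Y): after do(X=x0), Y ~ N(a*x0, sigma).
ll_h1 = log_norm(y_int, a * x0, sigma).sum()

# H2 (Y -> X), fit to match the same *observational* joint: intervening on X
# leaves Y at its marginal, N(0, sqrt(a^2 + sigma^2)).
ll_h2 = log_norm(y_int, 0.0, np.sqrt(a**2 + sigma**2)).sum()

# Log likelihood ratio = log Bayes factor here (point hypotheses, equal priors).
print(ll_h1 - ll_h2)  # large and positive: the data strongly favor X -> Y
```

This uses known parameters to keep the arithmetic transparent; the same comparison goes through with priors over the parameters, at the cost of integrating them out. The point is only that the two causal structures are distinguishable in principle once you write down what each one predicts.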
Some time in the next week I’ll write up a post with a few full examples (including the one from Robins, Hernan and Wasserman), and explain in a bit more detail.
(Side note: I suspect that the reason smart people have had so much trouble here is that the previous generation was mostly introduced to Bayesian statistics by Savage or Gelman; I expect someone who started with Jaynes would have a lot less trouble here, but his main textbook is relatively recent.)
Some time in the next week I’ll write up a post with a few full examples (including the one from Robins, Hernan and Wasserman), and explain in a bit more detail.
I look forward to reading it. To be honest, knowing these authors, I’d be surprised if you’ve found an error that breaks their argument.
We are now discussing questions that are so far outside of my expertise that I do not have the ability to independently evaluate the arguments, so I am unlikely to contribute further to this particular subthread (i.e. to the discussion about whether there exists an obvious and superior Bayesian solution to the problem I am trying to solve).