Forgive my confusion; I’m a bad statistician of any sort. How do you include ‘something else’ in your model? Don’t you need to at least (for Monte Carlo techniques) be able to generate “forward” from parameters to simulated data?
Or do you include Gelman’s posterior predictive check in the model somehow, so that data that is sufficiently surprising causes a “misspecification alarm” to go off?
I’m not sure of the best way to handle simplifying a model without doing insane things. I do know that if what you are doing amounts to overtly “putting zero probability on it”, then what you are doing is a terminal mistake that makes the process distinctly non-Bayesian. I get the impression that the mistakes Bayesians are trying to correct with their after-the-fact testing of the model are different ones from this one. If common ‘Bayesian statisticians’ do in fact make mistakes of this order, then consider me mistaken, but also consider their claims to be ‘Bayesians’, more or less, lies.
I get the impression that the mistakes Bayesians are trying to correct with their after-the-fact testing of the model are different ones from this one.
If you choose a single model to work with, you are effectively putting zero probability on all other models (that are not contained in your chosen model as sub-models). Gelman’s posterior predictive checks aren’t motivated by this consideration (one of his non-mainstream-for-a-Bayesian stances is that model probabilities aren’t useful). Nevertheless, the checks are directed at identifying ways in which the model fits the data poorly, with an eye to guiding further model elaboration, so they do address this issue in a sense.
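For concreteness, here is a minimal sketch (Python, with invented counts and a toy Poisson model, so every number and choice here is an assumption for illustration) of the kind of check being described, which also answers the “generate forward” question above: draw replicated datasets from the fitted model and ask whether a test statistic of the real data looks surprising among them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented observed counts standing in for real data.
y_obs = np.array([0, 1, 0, 2, 0, 0, 3, 0, 1, 0, 0, 5, 0, 0, 1])

# Toy model: y ~ Poisson(lam), lam ~ Gamma(a0, b0); the conjugate posterior
# is Gamma(a0 + sum(y), b0 + n), so it is easy to sample from directly.
a0, b0 = 1.0, 1.0
a_post, b_post = a0 + y_obs.sum(), b0 + len(y_obs)

# Test statistic: dispersion index. A Poisson model says this should be ~1.
def T(y):
    return np.var(y) / np.mean(y)

T_obs = T(y_obs)
T_rep = []
for _ in range(4000):
    lam = rng.gamma(a_post, 1.0 / b_post)        # draw lam from the posterior
    y_rep = rng.poisson(lam, size=len(y_obs))    # "forward" simulated data
    T_rep.append(T(y_rep))

# Posterior predictive p-value: if it is extreme, the "misspecification
# alarm" goes off, because the model rarely generates data this dispersed.
p = np.mean(np.array(T_rep) >= T_obs)
print(f"T_obs = {T_obs:.2f}, posterior predictive p = {p:.3f}")
```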
“putting zero probability on it”… is a terminal mistake that makes the process distinctly non-Bayesian.
Philosophically this is true, but practically speaking, it’s not. Setting certain posterior probabilities to zero can be a good approximation to a fully Bayesian analysis (e.g., this paper). In fact, if it’s appropriate to use a small number of sigfigs in your results, this approximation can yield the exact same results far faster. I don’t think it’s fair to call the labeling of such an analysis as Bayesian a lie.
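To make that concrete, here is a tiny numerical example (the priors and likelihoods are invented, not taken from any paper): dropping a hypothesis with a tiny prior and renormalising gives posteriors that agree with the full analysis to about as many significant figures as you would report anyway.

```python
import numpy as np

# Invented priors and likelihoods, purely for illustration.
prior = np.array([0.499, 0.499, 0.002])     # hypotheses A, B, and a long shot C
like  = np.array([0.30, 0.10, 0.20])        # P(data | hypothesis)

# Full Bayesian update over all three hypotheses.
post_full = prior * like
post_full /= post_full.sum()

# Approximation: set C's posterior to zero and renormalise over A and B only.
post_trunc = prior[:2] * like[:2]
post_trunc /= post_trunc.sum()

# The two agree to roughly two significant figures (about 0.75 / 0.25 for
# A / B), which is often all the precision the reported results carry anyway.
print("full:     ", post_full)
print("truncated:", post_trunc)
```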
If you choose a single model to work with, you are effectively putting zero probability on all other models (that are not contained in your chosen model as sub-models).
I follow this reasoning and it applies in many cases. The reason I do not consider it applicable to the example given is the explicit mention of “We can make Z’s pre-evidence probability arbitrarily small, to make this seem reasonable at the time.” That changes the meaning of the example significantly in my understanding.
I claim that if Z is given enough consideration that ‘arbitrarily small’ is plugged in, rather than merely being excluded from the model, then it is just an error, not an approximation. There are valid examples of Bayes-in-practice that support the position John takes, but I just don’t consider this example a fair representation: partly because the mistake is a bad way to handle urns, and partly because explicitly plugging in bad priors for Z should make you explicitly expect bad posteriors for Z. Exclusion from the model itself is a different problem.
Good answer. I got a bit confused because Z has two meanings: “ball labelled Z was observed” (data), and “ball came from urn Z” (hypothesis). John’s model can assign zero probability to data that could possibly be observed, and that’s the big no-no.
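A made-up numerical illustration of both points (all probabilities invented): an ‘arbitrarily small’ prior on Z keeps Z’s posterior negligible no matter what ball is drawn, while a model that assigns zero probability to an observable label makes the update undefined outright.

```python
import numpy as np

# Hypotheses: the ball came from urn A, B, C, or some other urn Z.
# All numbers below are invented for illustration.
labels  = ["A", "B", "C", "Z"]
p_zball = np.array([0.01, 0.01, 0.01, 0.90])   # P(ball labelled Z | urn)

def posterior(prior):
    joint = prior * p_zball                    # Bayes' rule, unnormalised
    return joint / joint.sum()

# An "arbitrarily small" prior on Z: a Z-labelled ball is strong evidence
# for Z, yet its posterior stays negligible; a bad prior buys a bad posterior.
prior_tiny = np.array([1/3, 1/3, 1/3 - 1e-9, 1e-9])
print("posterior on Z:", posterior(prior_tiny)[-1])       # ~1e-7

# Excluding Z from the model altogether is a different failure: if the urns
# we kept also cannot produce the observed label, the normaliser is zero and
# Bayes' rule simply has nothing to say.
prior_zero   = np.array([1/3, 1/3, 1/3, 0.0])
p_impossible = np.array([0.0, 0.0, 0.0, 0.90])
print("normaliser:", (prior_zero * p_impossible).sum())   # 0.0
```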
How do you include ‘something else’ in your model? Don’t you need to at least (for Monte Carlo techniques) be able to generate “forward” from parameters to simulated data?
In the example provided it would be by having the labels “A, B, C and Zooblefuzz”, where Zooblefuzz is clearly defined as ‘any other urn than A, B or C’.
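A minimal sketch of that suggestion (Python; the prior and likelihood numbers for “Zooblefuzz” are invented for illustration): include the catch-all as a fourth hypothesis with a vague likelihood over labels, and the posterior can flow to it when the named urns cannot explain the observation.

```python
import numpy as np

# Urns A, B, C plus "Zooblefuzz" = any other urn. Likelihoods are invented;
# only the structure of the update matters here.
urns   = ["A", "B", "C", "Zooblefuzz"]
prior  = np.array([0.32, 0.32, 0.32, 0.04])
labels = ["A", "B", "C", "Z"]

# P(observed label | urn): A, B, C mostly yield their own label, while the
# catch-all spreads its probability thinly over every label we might see.
like = np.array([
    [0.90, 0.05, 0.05, 0.00],   # urn A
    [0.05, 0.90, 0.05, 0.00],   # urn B
    [0.05, 0.05, 0.90, 0.00],   # urn C
    [0.25, 0.25, 0.25, 0.25],   # Zooblefuzz: vague over all labels
])

def update(prior, observed_label):
    col = labels.index(observed_label)
    joint = prior * like[:, col]
    return joint / joint.sum()

# A ball labelled "Z" is drawn: only the catch-all can explain it, so the
# posterior moves onto Zooblefuzz instead of the update blowing up.
print(dict(zip(urns, np.round(update(prior, "Z"), 3))))
```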
Good answer. I neglected to read up-thread with enough thoroughness.