Suppose (because you’re a computationally-limited Bayesian) that you only include in your model the N highest-probability hypotheses. That is, you include A, B, C in your model, but you neglect Z—that is, you put zero probability on it. (We can make Z’s pre-evidence probability arbitrarily small, to make this seem reasonable at the time.) When one, or even N, balls turn out to be labeled Z, the model (due to the initial zero probability on Z) continues insisting that the balls came from one of the initially-specified hypotheses.
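To make the failure concrete, here is a minimal sketch of the urn setup (my own illustration with made-up likelihood numbers, not anything from the paper). A hypothesis assigned prior probability exactly 0 can never recover, no matter how many Z-labeled balls arrive:

```python
# Hypothetical urns A, B, C, Z: each urn mostly emits balls bearing its own
# label. Once Z's prior is set to exactly 0, no amount of Z-labeled evidence
# can revive it. (Illustrative numbers only.)

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

labels = ["A", "B", "C", "Z"]
# P(observe label | urn): own label 97% of the time, each other label 1%.
likelihood = {urn: {lab: 0.97 if lab == urn else 0.01 for lab in labels}
              for urn in labels}

# The "computationally limited" prior: Z is dropped, i.e. given probability 0.
prior = {"A": 1/3, "B": 1/3, "C": 1/3, "Z": 0.0}

posterior = dict(prior)
for observed in ["Z"] * 10:          # ten balls labeled Z in a row
    posterior = normalize({urn: p * likelihood[urn][observed]
                           for urn, p in posterior.items()})

print(posterior)   # Z stays at exactly 0; all the mass remains on A, B, C
```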
That isn’t just a computational limitation. It’s an outright bug. Something that assigns 0 to Z is just not even an approximation of a Bayesian. A sane agent with limited resources may, for example, assign a probability to “A, B, C, and ‘something else’”. If it explicitly assigned a probability of (or arbitrarily close to) 0 to Z then it just fails at life.
Hi. I found the paper containing the example in question—it’s Bayesians sometimes cannot ignore even very implausible theories. I don’t understand everything in the paper, but it seems like they’ve anticipated your objection and have another example which explicitly includes a “Something else” case.
Forgive my confusion; I’m a bad statistician, of any sort. How do you include ‘something else’ in your model? Don’t you need to at least (for Monte Carlo techniques) be able to generate “forward” from parameters to simulated data?
Or do you include Gelman’s posterior predictive check in the model somehow, so that data that is sufficiently surprising causes a “misspecification alarm” to go off?
I’m not sure of the best way to handle simplifying a model without doing insane things. I do know that if what you are doing amounts to overtly “putting zero probability on it” then what you are doing is a terminal mistake that makes the process distinctly non-Bayesian. I get the impression that the mistakes that Bayesians are trying to correct with their after-the-fact testing of the model are different ones to this one. If common ‘Bayesian statisticians’ do in fact make mistakes of this order then consider me mistaken, but also consider their claims to be ‘Bayesians’, more or less, lies.
I get the impression that the mistakes that Bayesians are trying to correct with their after-the-fact testing of the model are different ones to this one.
If you choose a single model to work with, you are effectively putting zero probability on all other models (that are not contained in your chosen model as sub-models). Gelman’s posterior predictive checks aren’t motivated by this consideration (one of his non-mainstream-for-a-Bayesian stances is that model probabilities aren’t useful). Nevertheless, the checks are directed at identifying ways in which the model fits the data poorly, with an eye to guiding further model elaboration, so they do address this issue in a sense.
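For what it’s worth, here is a rough sketch of what such a check could look like for the urn example. The setup and the deliberately crude predictive distribution are my own (a proper check would simulate from the full posterior predictive, propagating parameter uncertainty); the point is just that a test statistic the restricted model essentially never reproduces is the “misspecification alarm”:

```python
# Posterior predictive check in the spirit of Gelman: fit the restricted A/B/C
# model, simulate replicate datasets from it, and compare a test statistic
# (here, the count of Z-labeled balls) against the observed data.

import random

labels_model_can_emit = ["A", "B", "C"]          # the restricted model
observed = ["Z", "Z", "A", "Z", "Z", "Z"]        # made-up data containing Z labels

def test_stat(data):
    return sum(1 for ball in data if ball == "Z")

def simulate_replicate(n):
    # Forward simulation under the restricted model: a deliberately crude
    # predictive distribution that can only ever emit A, B, or C.
    return [random.choice(labels_model_can_emit) for _ in range(n)]

n_rep = 2000
replicates = [simulate_replicate(len(observed)) for _ in range(n_rep)]
p_value = sum(test_stat(rep) >= test_stat(observed) for rep in replicates) / n_rep

print(p_value)   # ~0.0: the model essentially never reproduces the observed Z count
```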
“putting zero probability on it”… is a terminal mistake that makes the process distinctly non-Bayesian.
Philosophically this is true, but practically speaking, it’s not. Setting certain posterior probabilities to zero can be a good approximation to a fully Bayesian analysis (e.g., this paper). In fact, if it’s appropriate to use a small number of sigfigs in your results, this approximation can yield the exact same results far faster. I don’t think it’s fair to call the labeling of such an analysis as Bayesian a lie.
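To illustrate the kind of approximation being defended (a generic toy of my own construction, not the specific method of the linked paper): prune hypotheses whose posterior mass has become negligible and keep updating only the survivors. The answer agrees with the full computation to several significant figures while carrying far fewer hypotheses:

```python
# Toy example: posterior over a coin's bias on a grid of hypotheses, with and
# without pruning hypotheses whose posterior falls below a small threshold.

biases = [i / 100 for i in range(1, 100)]        # hypotheses: bias 0.01 .. 0.99
flips = [1] * 80 + [0] * 20                      # made-up data: 80 heads, 20 tails

def update(posterior, flip):
    post = {b: p * (b if flip else 1 - b) for b, p in posterior.items()}
    z = sum(post.values())
    return {b: p / z for b, p in post.items()}

def run(prune_eps=None):
    posterior = {b: 1 / len(biases) for b in biases}
    for flip in flips:
        posterior = update(posterior, flip)
        if prune_eps is not None:                # drop negligible hypotheses
            posterior = {b: p for b, p in posterior.items() if p >= prune_eps}
            z = sum(posterior.values())
            posterior = {b: p / z for b, p in posterior.items()}
    return sum(b * p for b, p in posterior.items())   # posterior mean of the bias

print(run())                  # full computation
print(run(prune_eps=1e-12))   # pruned: agrees to many sig figs, fewer hypotheses
```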
If you choose a single model to work with, you are effectively putting zero probability on all other models (that are not contained in your chosen model as sub-models).
I follow this reasoning and it applies in many cases. The reason I do not consider it applicable to the example given is the explicit mention of “We can make Z’s pre-evidence probability arbitrarily small, to make this seem reasonable at the time.” That changes the meaning of the example significantly in my understanding.
I claim that if Z is given enough consideration that ‘arbitrarily small’ is plugged in, rather than Z merely being excluded from the model, then it is just an error, not an approximation. There are valid examples of Bayes-in-practice that support the position John takes, but I just don’t consider this example a fair representation. Partly because the mistake is a bad way to handle urns, and partly because explicitly plugging in bad priors for Z should make you explicitly expect bad posteriors for Z. Exclusion from the model itself is a different problem.
Good answer. I got a bit confused because Z has two meanings: “ball labelled Z was observed” (data), and “ball came from urn Z” (hypothesis). John’s model can assign zero probability to data that could possibly be observed, and that’s the big no-no.
How do you include ‘something else’ in your model? Don’t you need to at least (for Monte Carlo techniques) be able to generate “forward” from parameters to simulated data?
In the example provided it would be by having the labels “A, B, C and Zooblefuzz”, where Zooblefuzz is clearly defined as ‘any urn other than A, B or C’.
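A minimal sketch of how that could look, with made-up numbers of my own: give Zooblefuzz a tiny prior and a deliberately diffuse forward/emission distribution over labels (since we don’t know what an unknown urn would emit), and a few Z-labeled balls push nearly all the posterior mass onto it:

```python
# Catch-all hypothesis: "Zooblefuzz" stands for "some urn other than A, B or C".
# We can still "generate forward" from it by being maximally vague about which
# label an unknown urn would emit.

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

likelihood = {
    "A": {"A": 0.97, "B": 0.01, "C": 0.01, "Z": 0.01},
    "B": {"A": 0.01, "B": 0.97, "C": 0.01, "Z": 0.01},
    "C": {"A": 0.01, "B": 0.01, "C": 0.97, "Z": 0.01},
    "Zooblefuzz": {"A": 0.25, "B": 0.25, "C": 0.25, "Z": 0.25},  # diffuse emission
}

# Tiny but nonzero prior on the catch-all.
posterior = {"A": 0.333, "B": 0.333, "C": 0.333, "Zooblefuzz": 0.001}
for observed in ["Z"] * 5:
    posterior = normalize({h: p * likelihood[h][observed]
                           for h, p in posterior.items()})

print(posterior)   # nearly all mass ends up on Zooblefuzz after a few Z balls
```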
I like your point but not your example.
Good answer. I neglected to read up-thread with enough thoroughness.