There are rules for how to do arithmetic. If you want to get the right answer, you have to follow them. So, when adding 18 and 17, you can’t just decide that you don’t like to carry 1s today, and hence compute that 18+17=25.
Similarly, there are rules for how to do Bayesian probability calculations. If you want to get the right answer, you have to follow them. One of the rules is that the posterior probability of something is found by conditioning on all the data you have. If you do a clinical trial with 1000 subjects, you can’t just decide that you’d like to compute the posterior probability that the treatment works by conditioning on the data for just the first 700.
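To see why this matters, here is a toy Beta-Binomial version of the clinical-trial example. The counts are invented for illustration: suppose 600 of the 1000 subjects responded, and 430 of the first 700 did. Conditioning on only the first 700 subjects gives a genuinely different posterior:

```python
from fractions import Fraction

def beta_binomial_posterior_mean(successes, trials, a=1, b=1):
    """Posterior mean of the success probability under a Beta(a, b) prior
    after observing `successes` out of `trials` (Beta-Binomial conjugacy)."""
    return Fraction(a + successes, a + b + trials)

# Hypothetical trial: 600 of 1000 subjects responded to treatment,
# of whom 430 were among the first 700 subjects.
full = beta_binomial_posterior_mean(600, 1000)    # condition on all the data
partial = beta_binomial_posterior_mean(430, 700)  # condition on first 700 only

print(float(full), float(partial))  # the two posteriors differ
```

The partial-data posterior isn’t just less precise; it’s a different answer, and decisions based on it will in general be wrong.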
If you’ve seen the output of a random number generator, and are using this to compute a posterior probability, you condition on the actual number observed, say 71. You do not condition on any of the other events you mention, because they are less informative than the actual number; conditioning on them would amount to ignoring part of the data. (In some circumstances, conditioning on all the data gives the same result as conditioning on some function of the data; that function is then called a “sufficient statistic”. But it is always correct to condition on all the data.)
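A sketch of the point with made-up numbers: suppose the observed number could have come from one of two hypothetical generators, and you see 71. Conditioning on the exact value gives a different posterior than conditioning only on the coarser event “the number exceeds 50”:

```python
# Two hypothetical generators of an integer in 1..100, equal prior odds:
#   H1: uniform, P(x) = 1/100 for each x
#   H2: biased toward large values, P(x) = x / 5050
prior = {"H1": 0.5, "H2": 0.5}

def posterior(likelihood):
    """Bayes' rule over the two hypotheses, given each one's likelihood."""
    joint = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(joint.values())
    return {h: joint[h] / z for h in joint}

# Condition on the actual observation, x = 71.
exact = posterior({"H1": 1 / 100, "H2": 71 / 5050})

# Condition only on the coarser event "x > 50" (ignoring part of the data).
coarse = posterior({"H1": 50 / 100, "H2": sum(range(51, 101)) / 5050})

print(exact["H2"], coarse["H2"])  # different posterior probabilities
```

If the coarse event happened to be a sufficient statistic for distinguishing the hypotheses, the two answers would coincide; here it isn’t, and they don’t.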
This is absolutely standard Bayesian procedure. There is nothing in the least bit controversial about it. (That is, it is definitely how Bayesian inference works—there are of course some people who don’t accept that Bayesian inference is the right thing to do.)
Similarly, there are certain rules for how to apply decision theory to choose an action to maximize your expected utility, based on probability judgements that you’ve made.
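In code, the rule is simply: for each available action, weight its utility in each state by your probability for that state, and take the action with the largest sum. A generic sketch (the action names and payoffs are hypothetical):

```python
def best_action(p_state, utility):
    """Return the action maximizing expected utility.
    p_state: dict mapping state -> probability.
    utility: dict mapping (action, state) -> payoff."""
    actions = {a for (a, _) in utility}
    def expected_utility(a):
        return sum(p_state[s] * utility[(a, s)] for s in p_state)
    return max(actions, key=expected_utility)

# Hypothetical bet: pay 40 to win 100 if it rains (net +60 / -40).
p = {"rain": 0.3, "dry": 0.7}
u = {("bet", "rain"): 60, ("bet", "dry"): -40,
     ("pass", "rain"): 0, ("pass", "dry"): 0}
print(best_action(p, u))  # betting has expected utility 0.3*60 - 0.7*40 = -10
```

With these numbers the bet has negative expected utility, so the rule says to pass; change the probabilities and the chosen action changes with them.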
If you compute probabilities incorrectly, and then incorrectly apply decision theory to choose an action based on these incorrect probabilities, it is possible that your two errors will cancel out. That is actually rather likely if you have other ways of telling what the right answer is, and hence have the opportunity to make ad hoc (incorrect) alterations to how you apply decision theory in order to get the right decision with the wrong probabilities.
If you’d like to outline some specific betting scenario for Sleeping Beauty, I’ll show you how applying decision theory correctly produces the right action only if Beauty judges the probability of Heads to be 1⁄3.
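As one concrete example of the kind of scenario meant here (my choice of bet, not part of the original problem statement): suppose on every awakening Beauty is offered $1 if the coin landed Heads, for some price. Since Tails produces two awakenings and Heads one, only a third of awakenings are Heads-awakenings, so the break-even price is 1/3. A quick Monte Carlo check:

```python
import random

def awakening_heads_fraction(n_experiments, seed=0):
    """Fraction of awakenings at which the coin is Heads.
    Heads -> 1 awakening (Monday); Tails -> 2 awakenings (Monday, Tuesday)."""
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(n_experiments):
        if rng.random() < 0.5:       # Heads: one awakening
            heads_awakenings += 1
            total_awakenings += 1
        else:                        # Tails: two awakenings
            total_awakenings += 2
    return heads_awakenings / total_awakenings

print(awakening_heads_fraction(100_000))  # close to 1/3
```

A Beauty who assigns probability 1/3 to Heads and applies decision theory straightforwardly accepts this bet exactly when the price is below 1/3, which is the break-even price; a Beauty who assigns 1/2 and applies decision theory mechanically will overpay and lose money on average.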