If I understand rightly, you’re happy with my values for p(H), p(D) and p(D|H), but you’re not happy with the result. So you’re claiming that a Bayesian reasoner has to abandon Bayes’ Law in order to get the right answer to this problem. (Which is what I pointed out above.)
Is your argument the same as the one made by Bradley Monton? In his paper “Sleeping Beauty and the forgetful Bayesian”, Monton argues convincingly that a Bayesian reasoner needs to update upon forgetting, but he doesn’t give a rule explaining how to do it.
Naively, I can imagine doing this by putting the reasoner back in the situation before they learned the information they forgot, and then updating forwards again, but omitting the forgotten information. (Monton gives an example on pp. 51–52 where this works; I sketch the generic procedure below.) But I can’t see how to make this work in the Sleeping Beauty case: how do I put Sleeping Beauty back in the state before she learned what day it is?
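Here is the generic rewind-and-re-update procedure I have in mind, as a toy Python sketch. The hypotheses, priors and likelihoods are invented for illustration; none of this is Monton’s notation.

```python
# A toy sketch of "rewind and re-update" forgetting: rebuild the credence
# from the original prior, replaying every update except the forgotten one.
# The hypotheses, priors and likelihoods below are invented for illustration.

def bayes_update(prior, likelihood):
    """Bayes' Law: posterior(h) is proportional to prior(h) * p(e | h)."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

def forget_by_rewinding(prior, likelihoods, forgotten):
    """Return the credence with the `forgotten` piece of evidence omitted."""
    belief = dict(prior)
    for i, likelihood in enumerate(likelihoods):
        if i == forgotten:
            continue  # skip the evidence the reasoner has forgotten
        belief = bayes_update(belief, likelihood)
    return belief

prior = {"H1": 0.5, "H2": 0.5}
likelihoods = [
    {"H1": 0.9, "H2": 0.1},  # evidence e1: strongly favours H1
    {"H1": 0.4, "H2": 0.6},  # evidence e2: mildly favours H2
]

belief = prior
for lk in likelihoods:
    belief = bayes_update(belief, lk)
print(belief)                                      # after learning e1 and e2
print(forget_by_rewinding(prior, likelihoods, 0))  # after then forgetting e1
```

This works when the forgotten evidence arrived as a discrete, identifiable update; my difficulty is that Sleeping Beauty’s knowledge of the day doesn’t obviously take that form.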
So I think the onus remains with you to explain the rules for Bayesian forgetting, and how they lead to the answer ⅓ in this case. (If you can do this convincingly, then we can explain the hardness of the Sleeping Beauty problem by pointing out how little-known the rules for Bayesian forgetting are.)
Well, there is nothing wrong with Bayes’ Law. It doesn’t model forgetting—but it doesn’t pretend to. I would not say you have to “abandon” Bayes’ Law to solve the problem. It is just that the problem includes a process (namely: forgetting) that Bayes’ Law makes no attempt to model in the first place. Bayes’ Law works just fine for the elements of the problem that involve updating on evidence. What you must not do is abuse Bayes’ Law by using it in circumstances for which it was never intended and for which it is not appropriate.
Your opinion that I am under some kind of obligation to provide a lecture on the little-known topic of Bayesian forgetting has been duly noted. Fortunately, people don’t need to know or understand the Bayesian rules of forgetting in order to successfully solve this problem—but it would certainly help if they avoided applying the Bayes update rule while completely ignoring the effect of drug-induced amnesia—much as Bradley Monton explains.
You’re not obliged to give a lecture. A reference would be ideal.
Appealing to “forgetting” only gives an argument that our reasoning methods are incomplete: it doesn’t argue against ½ or in favour of ⅓. We need to see the rules and the calculation to decide if it settles the matter.
To reiterate, people do not need to know or understand the Bayesian rules of forgetting in order to successfully solve this problem. As far as I am aware, nobody used this approach to solve the problem, but the vast majority obtained the correct answer nonetheless. Correct reasoning is given on http://en.wikipedia.org/wiki/Sleeping_Beauty_problem—and in dozens of prior comments on the subject.
The Wikipedia page explains how a frequentist can get the answer ⅓, but it doesn’t explain how a Bayesian can get that answer. That’s what’s missing.
I’m still hoping for a reference for “the Bayesian rules of forgetting”. If these rules exist, then we can check to see if they give the answer ⅓ in the Sleeping Beauty case. That would go a long way to convincing a naive Bayesian.
I do not think it is missing—since a Bayesian can ask themselves at what odds they would accept a bet on the coin coming up heads—just as easily as any other agent can.
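For instance, the fair betting odds can be checked with a quick simulation. This is a rough sketch under the standard protocol (one awakening on heads, two on tails); the function name and trial count are my own.

```python
# A rough check of the betting argument by simulation: heads yields one
# awakening, tails yields two, so a bet on heads offered at every awakening
# wins about a third of the time.
import random

def heads_fraction_per_awakening(trials=100_000, seed=0):
    rng = random.Random(seed)
    heads_awakenings = total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5   # fair coin toss for this run
        awakenings = 1 if heads else 2
        total_awakenings += awakenings
        if heads:
            heads_awakenings += 1    # the single awakening under heads
    return heads_awakenings / total_awakenings

print(heads_fraction_per_awakening())  # close to 1/3
```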
What is missing is an account involving Bayesian forgetting. It’s missing because solving the problem that way makes little practical sense.
Now, it might be an interesting exercise to explore the rules of Bayesian forgetting—but I don’t think it can be claimed that they are needed to solve this problem—even from a Bayesian perspective. Bayesians have more tools available to them than just Bayes’ Law.
FWIW, Bayesian forgetting looks somewhat manageable. Bayes’ Law is a reversible calculation—so you can just un-apply it.
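As a sketch of what I mean (illustrative numbers only, and assuming every likelihood is nonzero so the division is defined): since the posterior is proportional to the prior times the likelihood, dividing the likelihood back out and renormalising recovers the prior.

```python
# A sketch of un-applying an update: since posterior(h) is proportional to
# prior(h) * p(e | h), dividing p(e | h) back out and renormalising recovers
# the prior (assuming every likelihood is nonzero). Numbers are illustrative.

def bayes_update(prior, likelihood):
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

def bayes_unupdate(posterior, likelihood):
    unnorm = {h: posterior[h] / likelihood[h] for h in posterior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

prior = {"heads": 0.5, "tails": 0.5}
likelihood = {"heads": 0.2, "tails": 0.8}
posterior = bayes_update(prior, likelihood)
print(posterior)                              # {'heads': 0.2, 'tails': 0.8}
print(bayes_unupdate(posterior, likelihood))  # recovers the original prior
```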