You’re not obliged to give a lecture. A reference would be ideal.
Appealing to “forgetting” only gives an argument that our reasoning methods are incomplete: it doesn’t argue against ½ or in favour of ⅓. We need to see the rules and the calculation to decide if it settles the matter.
To reiterate, people do not need to know or understand the Bayesian rules of forgetting in order to solve this problem successfully. Nobody used that approach, as far as I am aware, but the vast majority obtained the correct answer nonetheless. Correct reasoning is given on http://en.wikipedia.org/wiki/Sleeping_Beauty_problem and in dozens of prior comments on the subject.
The Wikipedia page explains how a frequentist can get the answer ⅓, but it doesn’t explain how a Bayesian can get that answer. That’s what’s missing.
I’m still hoping for a reference for “the Bayesian rules of forgetting”. If these rules exist, then we can check to see if they give the answer ⅓ in the Sleeping Beauty case. That would go a long way to convincing a naive Bayesian.
I do not think it is missing: a Bayesian can ask themselves at what odds they would accept a bet on the coin coming up heads, just as easily as any other agent can.
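For anyone who wants the betting reading spelled out, here is a minimal simulation sketch in Python (the function name and the per-awakening bet framing are my own, not from the thread or the Wikipedia page): heads means one awakening, tails means two, and a ticket paying 1 on heads, offered at every awakening, breaks even near ⅓.

```python
import random

def breakeven_price_per_awakening(trials=100_000):
    """Estimate the fair price of a ticket that pays 1 if the coin
    landed heads, offered once at every awakening.
    Heads -> Beauty is woken once; tails -> she is woken twice."""
    awakenings = 0
    heads_awakenings = 0
    for _ in range(trials):
        heads = random.random() < 0.5   # fair coin
        wakes = 1 if heads else 2
        awakenings += wakes
        if heads:
            heads_awakenings += wakes
    # Fraction of awakenings at which the coin shows heads ~ fair price.
    return heads_awakenings / awakenings

print(breakeven_price_per_awakening())  # ~0.333
```

The same arithmetic can be done by hand: half the flips contribute one heads-awakening each and the other half contribute two tails-awakenings each, so heads-awakenings make up 0.5 / (0.5 + 1.0) = ⅓ of all awakenings. Whether per-awakening betting odds are the right way to read "credence" is, of course, exactly what halfers and thirders dispute.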
What is missing is an account involving Bayesian forgetting. It’s missing because that is a way of solving the problem which makes little practical sense.
Now, it might be an interesting exercise to explore the rules of Bayesian forgetting—but I don’t think it can be claimed that that is needed to solve this problem—even from a Bayesian perspective. Bayesians have more tools available to them than just Bayes’ Law.
FWIW, Bayesian forgetting looks somewhat manageable. Bayes' Law is a reversible calculation, so you can just un-apply it.
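A minimal sketch of what "un-applying" Bayes' Law could look like, assuming the agent still remembers which likelihoods were used in the original update (the function names and the toy evidence model are mine): divide the posterior by the likelihood of the evidence to be forgotten and renormalise.

```python
def bayes_update(prior, likelihood):
    """prior: dict hypothesis -> P(H); likelihood: dict hypothesis -> P(E|H).
    Returns the posterior P(H|E) by Bayes' Law."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

def bayes_forget(posterior, likelihood):
    """Un-apply the update above: divide out P(E|H) and renormalise.
    Recovers the prior exactly, provided the same likelihoods are known."""
    unnorm = {h: posterior[h] / likelihood[h] for h in posterior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

prior = {"heads": 0.5, "tails": 0.5}
likelihood = {"heads": 0.9, "tails": 0.3}   # arbitrary toy evidence model
posterior = bayes_update(prior, likelihood)  # -> {heads: 0.75, tails: 0.25}
print(bayes_forget(posterior, likelihood))   # -> back to {heads: 0.5, tails: 0.5}
```

The sketch only shows that un-updating is arithmetically well-defined when the likelihoods are remembered; whether this kind of forgetting, applied to Beauty's indexical evidence, actually delivers ⅓ is the part still in dispute here.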