I’m not sure I have an official position on Bayesian epistemology, but I find the problem very confusing until you tell me what the payoff is. One might make an educated guess at the kind of payoff system the experiment designers would have had in mind—as many in this thread have done. (ETA: actually, you probably have to weigh your answer according to your degree of belief in the interpretation you’ve chosen. Which is of course ridiculous. Let’s just include the payoff scheme in the experiment.)
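To make the payoff-dependence concrete, here is a minimal simulation sketch (my own construction, not part of the original problem) of the standard setup: heads means one awakening, tails means two. If Beauty is scored once per experiment, heads comes up in about half the cases; if she is scored once per awakening, only about a third of her awakenings are heads-awakenings.

```python
import random

def sleeping_beauty(n_experiments=200_000, seed=0):
    """Simulate the Sleeping Beauty setup: heads -> 1 awakening, tails -> 2."""
    rng = random.Random(seed)
    heads_experiments = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(n_experiments):
        heads = rng.random() < 0.5
        awakenings = 1 if heads else 2
        heads_experiments += 1 if heads else 0
        heads_awakenings += awakenings if heads else 0
        total_awakenings += awakenings
    # One bet per coin flip: fraction of experiments that were heads (~1/2)
    per_experiment = heads_experiments / n_experiments
    # One bet per awakening: fraction of awakenings that were heads (~1/3)
    per_awakening = heads_awakenings / total_awakenings
    return per_experiment, per_awakening

print(sleeping_beauty())
```

Both numbers describe the same experiment; only the payoff scheme differs, which is why the question feels underdetermined without one.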
I agree that more information would help Beauty, but I’m more interested in the issue of whether or not the question, as stated, is ill-posed.
One of the Bayesian vs. frequentist examples that I found most interesting was the case of the coin with unknown bias—a Bayesian would say it has 50% chance of coming up heads, but a frequentist would refuse to assign a probability. I was wondering if perhaps this is an analogous case for Bayesians.
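A quick sketch of the Bayesian side of that example, assuming (purely for illustration) a uniform prior over the unknown bias p—any prior symmetric about 1/2 gives the same answer: the predictive probability of heads is the prior mean of p.

```python
# Bayesian predictive probability of heads for a coin of unknown bias,
# under an assumed uniform prior over the bias p.
# P(heads) = E[p] under the prior, approximated on a midpoint grid.
N = 10_000
grid = [(i + 0.5) / N for i in range(N)]   # midpoint grid on [0, 1]
weights = [1.0 / N] * N                    # uniform prior weights
p_heads = sum(p * w for p, w in zip(grid, weights))
print(p_heads)  # ~0.5, by symmetry of the prior
```

The frequentist refusal is that p itself is a fixed unknown, not a random variable, so no such average is licensed.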
That wouldn’t necessarily mean anything is wrong with Bayesianism. Everyone has to draw the line somewhere, and it’s good to know where.
That’s fine. I guess I’m just not a Bayesian epistemologist.
If Sleeping Beauty is a Bayesian epistemologist, does that mean she refuses to answer the question as asked?