Beauty just knows that she’ll win the bet twice if the coin landed tails. We double count for tails...
That doesn’t mean your credence for heads is 1 -- it just means I added a greater penalty to the other option.
You don’t need a monetary reward for this reasoning to work. It’s a funny ambiguity, I think, in what ‘credence’ means. Intuitively, a well-calibrated person A should assign a probability of P% to X iff X happens on P% of the occasions where A assigned a P% probability to X.
If we accept this, then clearly 1⁄3 is correct. If we ran this experiment many times and Beauty guessed 1⁄3 for heads each time, then we’d find that heads actually came up on 1⁄3 of the occasions where she said “1/3”. Therefore, a well-calibrated Beauty guesses “1/3”.
On the other hand...
Here we’re still left with “occasions”. Should a well-calibrated person be right on half of the occasions they are asked, or be right about half of the events? If (on many trials) Beauty guesses “tails” every time, then she’s correct 2⁄3 of the times she’s asked. However, she’s correct 1⁄2 of the times that the coin is flipped.
If I ask you for the probability of ‘heads’ on a fair coin, you’ll come up with something like ‘1/2’. Suppose I ask you a million times before flipping, flip once, and it comes up tails; then I ask you once more before flipping, flip again, and it comes up heads. You should not count that as a million cases of ‘tails’ being the correct answer and one case of ‘heads’, even though a guess of ‘tails’ would have made you correct on a million occasions of being asked the question.
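A quick simulation makes the two counting conventions concrete (just a sketch of the payoff-free experiment; the counters are my own bookkeeping, not anything in the problem statement):

```python
import random

def simulate(trials=100_000):
    """One 'trial' is one run of the Sleeping Beauty experiment."""
    heads_flips = 0        # experiments in which the coin landed heads
    heads_awakenings = 0   # awakenings at which the coin had landed heads
    awakenings = 0         # total times Beauty is asked the question

    for _ in range(trials):
        heads = random.random() < 0.5
        if heads:
            heads_flips += 1
            heads_awakenings += 1  # asked once, on Monday
            awakenings += 1
        else:
            awakenings += 2        # asked on Monday and on Tuesday

    print("heads per flip:     ", heads_flips / trials)           # about 1/2
    print("heads per awakening:", heads_awakenings / awakenings)  # about 1/3

simulate()
```

Both numbers are perfectly good frequencies; the disagreement is over which denominator calibration should use.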
Well, the question was:
“What is your credence now for the proposition that our coin landed heads?”
No mention of “occasions”. Your comment doesn’t seem to be addressing that question, but some other ones, which are not mentioned in the problem description.
This explains why you can defend the “wrong” answer: you are not addressing the original question.
I did not claim that the problem statement used the word “occasions”.
Beauty should answer whatever probability she would answer if she were well-calibrated. So does a well-calibrated Beauty answer ‘1/2’ or ‘1/3’? Does Laplace let her into Heaven or not?
By the way, do you happen to remember the name or location of the article in which Eliezer proposed the idea of being graded for your beliefs (by Laplace or whoever), by something like cross-entropy or K-L divergence, such that if you ever said about something true that it had probability 0, you’d be infinitely wrong?
A Technical Explanation of Technical Explanation
What Nick said. Laplace is also mentioned jokingly in a different context in An Intuitive Explanation of Bayes’ Theorem.
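For reference, the grading scheme being described is essentially the logarithmic score: your penalty is minus the log of the probability you assigned to what actually happened, so assigning probability 0 to something true costs you infinitely much. A minimal illustration (mine, not taken from the article):

```python
import math

def log_score(assigned_prob):
    """Penalty for the probability you assigned to the outcome that occurred."""
    return float("inf") if assigned_prob == 0 else -math.log(assigned_prob)

print(log_score(0.5))   # about 0.69
print(log_score(0.99))  # about 0.01
print(log_score(0.0))   # inf: saying "probability 0" about something true
```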
Well, 1⁄3. I thought you were supposed to be defending the plausibility of the “1/2” answer here—not asking others which answer is right.
We know she will have the same credence on Monday as she does on Tuesday (if awakened), because of the amnesia. There is no reason to double count those. Under the experiment, you should think of there being one occasion under heads and one occasion under tails. From that perspective, a well-calibrated person A will assign 1⁄2 for heads. I think that is the correct way to view this problem. If there were a way for her to distinguish the days, things would be different.
Well, she does say it twice. That seems like at least a potential reason to count it as two answers.
You could say that 1⁄3 of the times the question is asked, the coin came up heads. You could also say that 1⁄2 of the beauties are asked about a coin that came up heads.
To me, this reinforces my doubt that probabilities and beliefs are the same thing.
EDIT: reworded for clarity
Why?
It illustrates fairly clearly how probabilities are defined in terms of the payoff structure (which things will have payoffs assigned to them and which things are considered “the same” for the purposes of assigning payoffs).
I’ve felt for a while that probabilities are more tied to the payoff structure than beliefs, and this discussion underlined that for me. I guess you could say that using beliefs (instead of probabilities) to make decisions is a heuristic that ignores, or at least downplays, the payoff structure.
I agree that probabilities are defined through wagers. I also think beliefs (or really, degrees of belief) are defined through wagers. That’s the way Bayesian epistemologists usually define degree of belief. So I believe X will occur with P = .5 iff a wager on X and a wager on a fair coin flip are equally preferable to me.
That’s fine. I guess I’m just not a Bayesian epistemologist.
If Sleeping Beauty is a Bayesian epistemologist, does that mean she refuses to answer the question as asked?
I’m not sure I have an official position on Bayesian epistemology, but I find the problem very confusing until you tell me what the payoff is. One might make an educated guess at the kind of payoff system the experiment designers would have had in mind—as many in this thread have done. (ETA: actually, you probably have to weight your answer according to your degree of belief in the interpretation you’ve chosen. Which is of course ridiculous. Let’s just include the payoff scheme in the experiment.)
I agree that more information would help the beauty, but I’m more interested in the issue of whether or not the question, as stated, is ill-posed.
One of the Bayesian vs. frequentist examples that I found most interesting was the case of the coin with unknown bias—a Bayesian would say it has 50% chance of coming up heads, but a frequentist would refuse to assign a probability. I was wondering if perhaps this is an analogous case for Bayesians.
That wouldn’t necessarily mean anything is wrong with Bayesianism. Everyone has to draw the line somewhere, and it’s good to know where.
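As a side note on the unknown-bias coin above: the Bayesian’s 50% just comes from averaging over a prior on the bias. A minimal sketch, assuming (for illustration only) a uniform prior:

```python
import random

n = 1_000_000
heads = 0
for _ in range(n):
    bias = random.random()      # unknown bias, drawn from a uniform prior
    if random.random() < bias:  # a single flip of that coin
        heads += 1

# The long-run frequency is the Bayesian's predictive probability of heads.
print(heads / n)  # about 0.5
```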
I can understand that, but the fact that a wager has been offered distorts the probabilities under a lot of circumstances.
How do you mean?
I just flipped a coin. Are you willing to offer me a wager on the outcome I have already seen? Yet tradition would suggest you still have degrees of belief in the possible outcomes.
The offering of the wager itself can act as useful information. Some people wager to win.
I see what you mean. Yes, actual, literal wagers are messier than beliefs. Another example is a bet that the world is going to end, which you should obviously always bet against at any odds, even if you believe the last days are upon us. The equivalence between degree of belief and fair betting odds is a more abstract equivalence with an idealized bookie who offers bets on everything, doesn’t take a cut for himself, and pays out even if you’re dead.
Actually, I like that metaphor! Let me work this out:
The bookie would see heads and tails with equal probability. However, the bookie would also see twice as many bets when tails comes up. In order to make the vig zero, the bookie should pay out as much as comes in for whichever bet comes up, and that works out to 1:2 on heads and 2:1 on tails! Thus, the bookie sets the probability for Beauty at 1⁄3.
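A rough sketch of that break-even calculation, under my own framing (Beauty stakes $1 on heads at every awakening, and the bookie must on average pay out exactly what comes in):

```python
stake = 1.0

# With probability 1/2 the coin lands heads: one awakening, one winning bet.
# With probability 1/2 it lands tails: two awakenings, two losing bets.
expected_money_in = 0.5 * (1 * stake) + 0.5 * (2 * stake)   # 1.5 per experiment
expected_winning_stakes = 0.5 * (1 * stake)                 # 0.5 per experiment

# Zero vig: everything taken in is returned to the winning stakes, so each
# winning dollar on heads must come back as gross_return dollars.
gross_return = expected_money_in / expected_winning_stakes  # 3.0, i.e. 2:1 net odds
implied_p_heads = 1 / gross_return                          # 1/3
print(gross_return, implied_p_heads)
```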
To make an end of the world bet, person A who believes the world is not about to end will give some money to person B who believes the world is about to end. If after an agreed upon time, it is observed that the world has not ended, person B then gives a larger amount of money to person A.
It is harder to recover probabilities from the bets of this form that people are willing to make, because interest rates are a confounding factor.
Bets with money assume a fairly constant and universal utility-per-dollar rate. But that can’t be assumed in this case, since money isn’t worth nearly as much if the world is about to end.
So you’d have to adjust for that. And of course even if you can figure out a fair wager given this issue it won’t be equivalent to the right degree of belief.
It isn’t that hard, is it? We just find the interest rate on the amount B got to begin with, right?
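A naive sketch of that adjustment (my framing: treat the bet as a loan, assume B is risk-neutral in money, and ignore the utility problems raised in the reply below):

```python
def implied_p_doom(x, y, r, t_years):
    """A gives B `x` now; B repays `y` after `t_years` if the world survives.
    With annual discount rate `r`, B is indifferent when x equals the
    discounted expected repayment; solve that for B's probability of doom."""
    present_value_of_repayment = y / (1 + r) ** t_years
    # x = (1 - p_doom) * present_value_of_repayment
    return 1 - x / present_value_of_repayment

print(implied_p_doom(x=100, y=150, r=0.05, t_years=1))  # about 0.30
```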
But if person B is right she only gets to enjoy the money until the world ends. It seems to me that money is less valuable when you can only derive utility from it for a small, finite period of time. You can’t get your money’s worth buying a house, for example. Plus if belief in the end of the world is widespread the economy will get distorted in a bunch of ways (in particular, the best ways to spend money with two weeks left to live would get really expensive) making it really hard to figure out what the fair bet would be.
‘Credence’ is not probability.
It means: “subjective probability”:
“In probability theory, credence means a subjective estimate of probability, as in Bayesian probability.”
http://en.wikipedia.org/wiki/Credence
An estimate of a thing is not the same thing as that thing. And Bayesian probability is probability, not an estimate of probability.
Or—to put it another way—for a Bayesian their estimated probability is the same as their subjective probability.
The concept of “estimated probability” doesn’t make sense (in the way you use it).
? You can certainly estimate a probability—just like Wikipedia says.
Say you have a coin. You might estimate the probability of it coming down heads after a good flip on a flat horizontal surface as being 0.5. If you had more knowledge about the coin, you might then revise your estimate to be 0.497. You can consider your subjective probability to be an estimate of the probability that an expert might use.
You don’t seem to understand the concept of Bayesian probability. Subjective probability is not estimation of “real probability”, there is no “real probability”. When you revise subjective probability, it’s not because you found out how to approximate “real probability” better, it’s because you are following the logic of subjective probability.
Really? Someone who’s been posting around these parts for years, and your best hypothesis is “doesn’t understand Bayesian probability”? How would you rank it compared to “Someone hijacked your LW account” or “I’m not understanding you” or “You said something that would have made sense except for a fairly improbable typo”?
This seems a reasonable hypothesis specifically because it’s Tim Tyler. It would be much less probable for most other old-timers (another salient exception that comes to mind is Phil Goetz, though I don’t remember what he understands about probability in particular).
You seem to have to misattribute the phrase “real probability” to me in order to make this claim. What I actually said was “the probability that an expert might use”.
I recommend you exercise caution with those quote marks when attributing silly positions to me: some people might be misled into thinking you were actually quoting me—rather than attacking some nonsense of your own creation.