1) Yes, the prior is the weighted average of posteriors. This is just the decomposition of P(A) into the sum over b of P(A|b)P(b). The rules applied to do this are the product rule and the mutual exclusivity and exhaustiveness of the different b.
Eliezer has a post on this called “conservation of expected evidence.”
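To spell that decomposition out with a toy example (the numbers below are made up purely for illustration; b1 and b2 stand for a mutually exclusive and exhaustive pair of possible observations):

    # Toy check: the prior on A equals the probability-weighted average
    # of the posteriors on A, summed over the possible observations b.
    p_b = {"b1": 0.3, "b2": 0.7}          # prior over what you might observe
    p_a_given_b = {"b1": 0.9, "b2": 0.2}  # posterior on A after each observation

    p_a = sum(p_a_given_b[b] * p_b[b] for b in p_b)
    print(p_a)  # 0.41 -- the prior you must have held before observing b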
2) True, though in anthropic problems this requires more than usual caution, because of the commonness of non-barking dogs (that is, places where you gain information even though no flashing signs pop up to make sure everyone knows you gained information).
In fact, I wrote the above sentence before looking at the blog post. And lo and behold, it’s relevant! Allen Downey says:
Whenever SB awakens, she has learned absolutely nothing she did not know Sunday night.
This is not so! But the information gained is what we sometimes call ‘indexical’ information—information about where, when, or who you are. When you wake up, the thing you learn is that you are now inside the experiment. That seems like a pretty important new thing to know.
I really like Downey’s train analogy. The trick, and the way to get ordinary Bayesian reasoning to work here, is to make sure to give different events their own probability—only when you treat the two local trains as two separate events (one way to do this is by setting aside two different labels for them) do you get the right answer. If you just say that P(express train)=1 and P(local train)=1 and stop there, you fail to capture some of your knowledge about the world. You have to say something like P(EXPR)=1, P(LOC1)=1, P(LOC2)=1, P(local|LOC1)=1, P(local|LOC2)=1; you have to tell the math that being a local train is a property held by two different actual trains.
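Here is a minimal sketch of that bookkeeping in code. The labels and the assumption that an encounter is equally likely to be with any one of the three trains are mine, purely for illustration; the only point is that "local" has to be represented as a property shared by two distinct outcomes:

    # Each concrete train gets its own label; "local" is a property that
    # two of those labeled trains happen to share.
    trains = {
        "EXPR": "express",
        "LOC1": "local",
        "LOC2": "local",
    }

    # Illustrative assumption: an encounter is equally likely to be with
    # any one of the three labeled trains.
    p_encounter = {label: 1 / len(trains) for label in trains}

    p_local = sum(p for label, p in p_encounter.items()
                  if trains[label] == "local")
    print(p_local)  # 0.666..., because "local" covers two distinct trains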
As for the claim about betting (let alone calling it a Fundamental Theorem): the entire point of the Sleeping Beauty problem is that the bet pays out to a different number of people than the number who actually made the bet before the experiment. Depending on how this is expected to play out, different betting strategies can be right. If all actual transactions occur only at payoff time, though, it seems correct to consider only the situation then.
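To make the dependence on the payout scheme concrete, here is a toy expected-value calculation under assumptions of my own choosing: a fair coin, a $1 even-money bet on heads, and the usual schedule of one awakening on heads and two on tails. It is not anyone's official argument; it just shows that the same odds can be fair or losing depending on how many times the bet is settled:

    # Same bet, two settlement schemes.
    p_heads = 0.5
    stake = 1.0
    awakenings = {"heads": 1, "tails": 2}

    # Settled once per run of the experiment:
    ev_per_experiment = p_heads * stake + (1 - p_heads) * (-stake)

    # Re-placed and settled at every awakening:
    ev_per_awakening = (p_heads * awakenings["heads"] * stake
                        + (1 - p_heads) * awakenings["tails"] * (-stake))

    print(ev_per_experiment)  # 0.0  -- even-money heads is fair
    print(ev_per_awakening)   # -0.5 -- the bet is duplicated on the tails branch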
But the information gained is what we sometimes call ‘indexical’ information—information about where, when, or who you are. When you wake up, the thing you learn is that you are now inside the experiment. That seems like a pretty important new thing to know.

Exactly! To quote Bostrom:
On this reasoning, it would seem, one could similarly argue that when Beauty awakes on Monday (but before she is informed that it is Monday) she likewise gets relevant evidence – centered evidence – about the future: namely that she is now in it.
Incidentally, I had a question on that paper, and now seems as good a time as any to bring it up. To quote the second-to-last paragraph (this will make no sense unless you’ve read it):
It is interesting that in Beauty and the Bookie, Beauty’s betting odds should deviate from her credence assignment even though the bet that might be placed on Tuesday would not result in any money switching hands. In a sense, the bet that Beauty and the bookie would agree to on Tuesday is void. Nevertheless, it is essential that this bet is included in the example. The bookie is unable to pursue the policy of only offering bets on Monday since he does not know which day it is when he wakes up. If we changed the example so that the bookie knew that it was Monday immediately upon awakening, then Beauty and the bookie would no longer have the same relevant information, and the Dutch book argument would fail. If instead we changed the example so that Beauty as well as the bookie knew that it was Monday immediately upon awakening, then Beauty’s credence in HEADS & MONDAY would be 1⁄2 throughout Monday, so again she would avoid a Dutch book.
I didn’t really get how this would work. If she doesn’t lose anything on the second bet, then that’s effectively not a bet. How can losing nothing be part of her expected loss calculations?