The Sleeping Beauty problem and transformation invariances
I recently read this blog post by Allen Downey, written in response to a reddit post about Julia Galef’s video on the Sleeping Beauty problem. Downey’s resolution boils down to a conjecture that optimal bets on lotteries should be based on one’s expected state of prior information just before the bet’s resolution, rather than one’s state of prior information at the time the bet is made.
I suspect that these two distributions are always identical. In fact, I think I remember reading in one of Jaynes’ papers about a requirement that any prior be invariant under the anticipation of new information. That is to say, the prior should be the weighted average of the possible posteriors, where each weight is the probability that the corresponding posterior would be obtained after some measurement. But now I can’t find this reference anywhere, and I’m starting to doubt that I understood it correctly when I read it.
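The "weighted average of posteriors" property described above can be checked numerically in a toy example. The sketch below (my own construction, not from Downey or Jaynes) uses a coin whose bias is either 0.3 or 0.7 with equal prior probability, and a single flip as the "measurement". Averaging the posteriors, weighted by the prior predictive probability of each outcome, recovers the prior exactly:

```python
# Toy check: prior == sum over outcomes of P(outcome) * posterior(outcome).
# Hypothesis space: coin bias theta in {0.3, 0.7}, uniform prior.
prior = {0.3: 0.5, 0.7: 0.5}

def posterior(outcome):
    """Posterior over theta after observing 'H' or 'T', via Bayes' theorem."""
    likelihood = {th: (th if outcome == "H" else 1 - th) for th in prior}
    evidence = sum(prior[th] * likelihood[th] for th in prior)
    return {th: prior[th] * likelihood[th] / evidence for th in prior}

# Prior predictive probability of each measurement outcome.
p_outcome = {d: sum(prior[th] * (th if d == "H" else 1 - th) for th in prior)
             for d in ("H", "T")}

# Weighted average of the possible posteriors.
avg_posterior = {th: sum(p_outcome[d] * posterior(d)[th] for d in ("H", "T"))
                 for th in prior}

print(avg_posterior)  # → {0.3: 0.5, 0.7: 0.5}, identical to the prior
```

This identity is just the law of total probability applied to the posterior, and it holds for any prior and any likelihood, not only this toy case; the posterior is a martingale with respect to future observations.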
So I have two questions:
1) Is there such a thing as this invariance requirement? Does anyone have a reference? It seems intuitive that the prior should equal the weighted average of the possible posteriors, since the prior must already contain all of our knowledge about the system. What is this property actually called?
2) If it exists, is it a corollary that our prior distribution must remain unchanged unless we acquire new information?