I already addressed this elsewhere. The problem is that W is not a boolean; it's a probability distribution over observer-moments, so P(W) and P(~W) are undefined (type errors).
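Here is a minimal sketch of the type mismatch I have in mind (my own encoding, purely illustrative; the names run_space and woken_moments are not from your post):

from fractions import Fraction

# Run-level sample space: one outcome per run of the experiment.
# "H" and "T" are ordinary events here, each with probability 1/2.
run_space = {"H": Fraction(1, 2), "T": Fraction(1, 2)}

# Observer-moment level: one entry per waking, weighted by the expected
# number of times it occurs in a run (heads: one waking, tails: two).
woken_moments = {
    ("H", "you just woke up"): Fraction(1, 2) * 1,
    ("T", "you just woke up"): Fraction(1, 2) * 2,
}

# "W" names a collection of observer-moments, not an outcome of the coin
# flip, so there is nothing in run_space for P(W) or P(~W) to point at.
print("W" in run_space)             # False
print(sum(woken_moments.values()))  # 3/2: a weight over moments, not a P(W)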
At one point in your post you said “For convenience let us say that the event W is being woken” and then later on you suggest W is something else, but I don’t see where you really defined it.
You’re saying W itself is a probability distribution. What probability distribution? Can you be specific?
P(H) and P(H|W) are probabilities. It's unclear to me how those can be well defined and yet the law of total probability not apply.
Suppose we write out SB as a world-program:

SleepingBeauty(S(I)) =
{
    coin = rnd({"H","T"})
    S("starting the experiment now")
    if(coin=="H"):
        S("you just woke up")      # heads: woken once
    else:
        S("you just woke up")      # tails: woken twice
        S("you just woke up")
    S("the experiment's over now")
    return 0
}
This notation is from decision theory; S is Sleeping Beauty's chosen strategy, a function which takes as arguments all the observations, including memories, that Sleeping Beauty has access to at that point, and returns the value of any decision SB makes. (In this case, the scenario doesn't actually do anything with SB's answers, so the program ignores them.)

An observer-moment is a complete state of the program at a point where S is executed, including the arguments to S. Now, take all the possible observer-moments, weighted by the expected number of times that a given run of SleepingBeauty contains that observer-moment. To condition on some information, take the subset of those observer-moments which match that information. So, P(coin=heads | I="you just woke up") asks: of all the calls to S where I="you just woke up", weighted by their expected number of occurrences, what fraction are on the heads branch? That fraction is 1/3. On the other hand, P(coin=heads | I="the experiment's over now") = 1/2, since each run ends with exactly one such call and half the weight is on the heads branch.
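To make the counting concrete, here is a short Python sketch (my own, not part of the setup above) that enumerates the observer-moments of the world-program, weights each by its expected number of occurrences per run, and computes both conditional probabilities:

from fractions import Fraction

def observer_moments():
    # Yield (weight, coin, observation) for every call to S in the
    # world-program, weighted by the probability of the branch containing it.
    for coin in ("H", "T"):
        p = Fraction(1, 2)                    # probability of this branch
        yield p, coin, "starting the experiment now"
        wakings = 1 if coin == "H" else 2     # heads: one waking, tails: two
        for _ in range(wakings):
            yield p, coin, "you just woke up"
        yield p, coin, "the experiment's over now"

def p_heads_given(observation):
    # P(coin=heads | I=observation) over the weighted observer-moments.
    total = Fraction(0)
    heads = Fraction(0)
    for weight, coin, obs in observer_moments():
        if obs == observation:
            total += weight
            if coin == "H":
                heads += weight
    return heads / total

print(p_heads_given("you just woke up"))           # 1/3
print(p_heads_given("the experiment's over now"))  # 1/2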
If P(H) and P(H|W) are probabilities, then it must be true that:
P(H)=P(H|W)P(W)+P(H|~W)P(~W), where ~W means not W (any other event), by the law of total probability
If P(H)=1/2 and P(H|W)=1/3, as you claim, then we have
1/2 = (1/3)P(W) + P(H|~W)(1 - P(W))
P(H|~W) should be 0, since we know she will be awakened if heads. But that leads to P(W)=3/2.
P(W) should be 1, but that leads to the contradiction 1/2 = 1/3.
So, this is a big mess.
The reason it is a big mess is that the 1/3 solution was derived by treating one random variable as two.
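For concreteness, here is a quick check of that algebra in Python (just exact-fraction arithmetic on the equations above, nothing more):

from fractions import Fraction

# Law of total probability: P(H) = P(H|W)P(W) + P(H|~W)(1 - P(W))
p_h_given_w = Fraction(1, 3)      # the claimed P(H|W)
p_h_given_not_w = Fraction(0)     # heads guarantees she is woken, so P(H|~W) = 0

# Case 1: insist P(H) = 1/2 and solve for P(W):
#   1/2 = (1/3)P(W) + 0*(1 - P(W))  =>  P(W) = 3/2
p_w = (Fraction(1, 2) - p_h_given_not_w) / (p_h_given_w - p_h_given_not_w)
print(p_w)   # 3/2, not a valid probability

# Case 2: insist P(W) = 1 (she is certainly woken) and compute P(H):
p_h = p_h_given_w * 1 + p_h_given_not_w * 0
print(p_h)   # 1/3, contradicting P(H) = 1/2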