I agree with the others that the decision theory should be sorted out before talking about probability theory that includes indexical uncertainty, but separately I think there’s an issue with your calculation.
“P(Beauty woken up at least once | heads) = P(Beauty woken up at least once | tails) = 1”
Consider the case where a biased quantum coin is flipped and the people in ‘heads’ branches are awoken in green rooms while the ‘tails’ branches are awoken in red rooms.
Upon awakening, you should figure that the coin was probably biased to put you there. However, P(at least one version of you seeing this color room | heads) = P(at least one version of you seeing this color room | tails) = 1. The problem is that “at least one” throws away information: P(I see this color | heads) ≠ P(I see this color | tails). The fact that you’re there can be evidence that the ‘measure’ is bigger. The real problem lies with this ‘measure’ thing, and with seeing what it counts as evidence for in which kinds of decision problems.
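To make the contrast concrete, here is a minimal sketch of the biased-coin calculation. The specific numbers (two candidate biases, a uniform prior over which coin was used) are my own illustrative assumptions, not anything from the thread; the point is just that conditioning on “at least one version of me sees green” is uninformative, while weighting by the measure of heads-branches is not:

```python
# Hypothetical setup: heads-branches wake copies in green rooms, tails-branches
# in red rooms. The coin is one of two biased coins; we don't know which.
biases = {"biased_heads": 0.9, "biased_tails": 0.1}   # P(heads) under each hypothesis
prior = {"biased_heads": 0.5, "biased_tails": 0.5}    # uniform prior over the hypotheses

# "At least one version of me wakes in a green room" has probability 1 under
# either hypothesis (some branch is always heads), so it carries no information.
likelihood_at_least_one = {h: 1.0 for h in biases}

# Weighting by the measure of branches in which *this* copy sees green
# does discriminate between the hypotheses.
likelihood_i_see_green = {h: p_heads for h, p_heads in biases.items()}

def posterior(likelihood):
    """Bayes update of the prior by the given likelihood."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

print(posterior(likelihood_at_least_one))  # {'biased_heads': 0.5, 'biased_tails': 0.5}
print(posterior(likelihood_i_see_green))   # {'biased_heads': 0.9, 'biased_tails': 0.1}
```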
The blue eyes problem is similar. Everyone knows that someone has blue eyes, and everyone knows that everyone knows that someone has blue eyes, yet “they gained no new information because he only told them that at least one person has blue eyes!” doesn’t hold.
The fact that you’re there can be evidence that the ‘measure’ is bigger.
That “you are there” is evidence that the set of possible worlds consistent with your observations excludes the worlds that don’t contain you, under the standard possible-worlds sample space. Probabilistic measure is fixed in a model from the start and doesn’t depend on which events you’ve observed; it is only used to determine the measure of events. Also, you might care about what happens in the possible worlds that don’t contain you at all.
Probabilistic measure is fixed in a model from the start and doesn’t depend on which events you’ve observed...
But the amount of quantum measure in each color room depends on which biased coin was flipped, and your knowledge of the quantum measure can change based on the outcome.