While pondering Bayesian updates in the Sleeping Beauty Paradox, I came across a bizarre variant that features something like an anti-update.
In this variant, as in the original, Sleeping Beauty is awakened on Monday regardless of the coin flip. On heads, she will be gently awakened and asked for her credence that the coin flipped heads. On tails, she will be instantly awakened by a mind-ray that also implants a false memory of having been gently awakened, having been asked for her credence that the coin flipped heads, and having answered. In both cases the interviewer then asks "are you sure?" She is aware of all these rules.
On heads, Sleeping Beauty awakens with certain knowledge that heads was flipped, because if tails was flipped then (according to the rules) she would have a memory that she doesn’t have. So she should answer “definitely heads”.
Immediately after she answers, though, her experience is completely consistent with tails having been flipped: in both branches she now remembers a gentle waking, a question, and her answer. So when asked whether she is sure, she should answer that she is not sure anymore.
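The convergence of mental states can be made concrete with a toy model. This is only an illustrative sketch, with hypothetical state labels of my own choosing: each coin branch is assigned the sequence of mental states Sleeping Beauty passes through, and her credence in heads given a mental state is computed by counting which branches are consistent with it (weighting the two branches of the fair coin equally).

```python
from fractions import Fraction

# Hypothetical model of the variant: each branch of the fair coin flip
# lists the mental states Sleeping Beauty occupies in that branch.
# Heads: she genuinely wakes gently, then answers.
# Tails: the mind-ray implants the post-answer state directly, so the
# pre-answer state never occurs in that branch.
branches = {
    "heads": [("gently awakened",), ("gently awakened", "answered")],
    "tails": [("gently awakened", "answered")],
}

def credence_heads(mental_state):
    """P(heads | current mental state), assuming a fair coin and that
    each branch passes exactly through its listed states."""
    consistent = [c for c, states in branches.items() if mental_state in states]
    return Fraction(("heads" in consistent), len(consistent))

print(credence_heads(("gently awakened",)))             # only heads fits -> 1
print(credence_heads(("gently awakened", "answered")))  # both fit -> 1/2
```

The point the sketch makes is that nothing happens *to her* between the two queries; her credence drops from 1 to 1/2 purely because her state becomes one that the tails branch could also have produced.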
This seems deeply weird. Normally new experiences reduce the space of possibilities, and Bayesian updates rely on this: a possibility that previously had zero credence cannot gain non-zero credence through any Bayesian update. I am led to suspect that Bayesian updating cannot be an appropriate model for changing credence in situations where an observer's relevant observable state can converge with that of other possible observers.
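The zero-credence claim is a direct consequence of Bayes' rule, since the posterior is proportional to prior times likelihood. A minimal demonstration (the hypotheses and numbers here are just for illustration):

```python
from fractions import Fraction

def bayes_update(prior, likelihood):
    """Standard Bayesian update: posterior is proportional to prior * likelihood."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Once a hypothesis has credence 0, no evidence can revive it,
# however strongly the evidence favors it.
prior = {"heads": Fraction(1), "tails": Fraction(0)}
evidence_favoring_tails = {"heads": Fraction(1, 100), "tails": Fraction(99, 100)}

posterior = bayes_update(prior, evidence_favoring_tails)
print(posterior)  # tails stays at 0, heads renormalizes to 1
```

So the move from "definitely heads" to "could be tails" cannot be produced by any likelihood function applied to the "definitely heads" prior; whatever is happening, it is not a Bayesian update.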
Fiction (including false memories) doesn’t follow the rules, and standard logic may not apply. This would feel weird because it IS weird.
The weird thing is that the person doing the anti-update isn’t subjected to any “fiction”. It’s only a possibility that might have happened and didn’t.
Alternatively: The subject is shown the result of the coin flip. A short time later, if the coin is heads, her memory is modified such that she remembers having seen tails.
Memory erasure can produce this sort of effect in a lot of situations. This is just a special case of that (I assume the false memory of a gentle waking also overwrites the true memory of the abrupt waking).
I deliberately wrote it so that there is no memory trickery or any other mental modification happening at all in the case where Sleeping Beauty updates from “definitely heads” to “hmm, could be tails”.
The bizarreness here is that all that is required is her awareness of some probability that another possible observer, even in a counterfactual universe she knows hasn't happened, could come to share her future mental state without having her current mental state.
Yes, in the tails case memories that Sleeping Beauty might have between sleep and full wakefulness are removed, if the process allows her to have formed any. I was just implicitly assuming that ordinary memory formation would be suppressed during the (short) memory creation process.