[Question] Quantum Suicide and Aumann’s Agreement Theorem
Alice and Bob want to know which interpretation of quantum mechanics is true. They devise an experiment: Bob will enter a box, where a quantum random number generator kills him with 99.99% probability.
Conditional on Many-Worlds being true, Bob expects to survive with 100% probability. Conditional on Copenhagen being true, Bob expects to survive with only 0.01% probability. So if Bob exits the box alive, he takes this as strong evidence in favor of Many-Worlds.
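To quantify "strong evidence", here is the Bayes calculation from Bob's point of view, assuming a 50/50 prior over the two interpretations (the prior is my assumption, not stated above) and the survival likelihoods just given:

$$
P(\text{MW} \mid \text{alive}) = \frac{P(\text{alive} \mid \text{MW})\,P(\text{MW})}{P(\text{alive} \mid \text{MW})\,P(\text{MW}) + P(\text{alive} \mid \text{C})\,P(\text{C})} = \frac{1 \cdot 0.5}{1 \cdot 0.5 + 0.0001 \cdot 0.5} \approx 0.9999.
$$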
Alice, on the other hand, calculates a 0.01% probability of seeing Bob exit the box alive, no matter which interpretation is true. Moreover, she knows that in the world where she sees Bob come out alive, Bob will be convinced that Many-Worlds is true, and this is again independent of which theory is really true.
As such, if Bob exits the box alive, Bob should update strongly in favor of Many-Worlds being true, and Alice should leave her prior probability unchanged, as she has gained no evidence in either direction.
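For concreteness, here is a minimal numeric sketch of both updates, again assuming a shared 50/50 prior (an illustrative assumption; no priors are specified above) and the likelihoods each agent assigns to the observation "Bob exits the box alive":

```python
# Sketch of both agents' Bayesian updates on the event
# "Bob is observed to exit the box alive".
# The 50/50 prior over interpretations is an assumption for illustration;
# the likelihoods are the ones described in the post.

prior_mw = 0.5          # assumed prior P(Many-Worlds)
prior_c = 1 - prior_mw  # assumed prior P(Copenhagen)

# Bob's likelihoods: he expects to survive for sure under Many-Worlds,
# and with probability 0.0001 under Copenhagen.
bob_like_mw, bob_like_c = 1.0, 0.0001

# Alice's likelihoods: she expects to see Bob come out alive with
# probability 0.0001 under either interpretation.
alice_like_mw, alice_like_c = 0.0001, 0.0001

def posterior(like_mw, like_c):
    """P(Many-Worlds | evidence) via Bayes' rule."""
    return like_mw * prior_mw / (like_mw * prior_mw + like_c * prior_c)

print(f"Bob's posterior:   {posterior(bob_like_mw, bob_like_c):.6f}")     # ~0.9999
print(f"Alice's posterior: {posterior(alice_like_mw, alice_like_c):.6f}")  # 0.5
```

The divergence comes entirely from the different likelihoods the two agents assign to the very same observation, which is the tension the question below is asking about.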
But if Alice and Bob are fully rational Bayesian agents with common priors, Aumann’s Agreement Theorem says that once their posteriors are common knowledge, they should land on the same posterior probability for the truth of Many-Worlds after Bob exits the box. Any evidence that’s valid for Bob should be equally valid for Alice, and they shouldn’t be able to “agree to disagree” about what probability to assign to the Many-Worlds interpretation.
What resolves this seeming contradiction?