Dragging up anthropic questions and quantum immortality: suppose I am Schrödinger’s cat. I enter the box ten times (each time it has a .5 probability of killing me), and survive. If I started with a .5 belief in QI, my belief is now 1024/1025.
But if you are watching, your belief in QI should not change. (If QI is true, the only outcome I can observe is surviving, so P_me(I survive | QI) = 1. But someone else can observe my death even if QI is true, so P_you(I survive | QI) = 1/1024 = P_you(I survive | ~QI).)
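Spelling out the arithmetic behind both claims (a quick sketch, writing S for “I survive all ten trials” and using the 0.5 prior above):

$$P_{\text{me}}(\text{QI} \mid S) = \frac{P_{\text{me}}(S \mid \text{QI})\,P(\text{QI})}{P_{\text{me}}(S \mid \text{QI})\,P(\text{QI}) + P_{\text{me}}(S \mid \lnot\text{QI})\,P(\lnot\text{QI})} = \frac{1 \times 0.5}{1 \times 0.5 + \tfrac{1}{1024} \times 0.5} = \frac{1024}{1025}$$

$$P_{\text{you}}(\text{QI} \mid S) = \frac{\tfrac{1}{1024} \times 0.5}{\tfrac{1}{1024} \times 0.5 + \tfrac{1}{1024} \times 0.5} = 0.5$$

Your likelihoods cancel, so your posterior is just your prior.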
Aumann’s agreement theorem says that if we share priors and have common knowledge of each other’s posteriors, we should each update until we hold the same posteriors. Aumann doesn’t require that we share observations, but in this case we’re doing that too. So what should we each end up believing? If you update in my direction, then every time anybody does something risky and survives, your belief in QI should go up. But if you don’t, then I’m not allowed to update my belief in QI even if I survive the box once a day for a thousand years. Neither of those seems sensible.
Does Aumann make an implicit assumption that we agree on all the values of P(evidence | model)? If so, is that a safe assumption even in normal applications? (Granted, the assumptions of “common priors” and “Bayesian rationalists” are already unsafe, so this might not cost much.)
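As a toy illustration of that question (just a sketch of the raw Bayes arithmetic; the function and variable names are mine, and the likelihood assignments are the ones argued above, not anything taken from Aumann), here is how the two posteriors behave as the number of survived trials grows, given a shared 0.5 prior but different values of P(evidence | model):

```python
def posterior(prior, lik_h, lik_not_h):
    """Bayes' rule for a binary hypothesis H given evidence E."""
    num = lik_h * prior
    return num / (num + lik_not_h * (1 - prior))

prior = 0.5  # shared prior on QI

for n in (10, 20, 30):       # number of survived 50/50 trials
    p_lucky = 0.5 ** n       # chance of surviving n trials by luck alone
    # Survivor: observing my own survival is certain if QI is true.
    me = posterior(prior, lik_h=1.0, lik_not_h=p_lucky)
    # Watcher: survival is equally (un)likely whether QI is true or not.
    you = posterior(prior, lik_h=p_lucky, lik_not_h=p_lucky)
    print(n, me, you)

# n=10: me ≈ 0.9990, you = 0.5
# n=30: me ≈ 0.999999999, you = 0.5
# The disagreement never closes, however long the run, because it comes
# entirely from the differing P(evidence | model) assignments.
```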
I don’t think Aumann’s agreement theorem is the problem here.
If QI is true, the only outcome I can observe is surviving.
What does it mean for QI to be true or false? What would you expect to happen differently? Certainly, whether or not QI is true, the only outcome you can observe is surviving, so I don’t see how you’re updating your belief.
If QI is true, I expect to observe myself surviving. If QI is false, I expect not to be able to observe anything. I don’t know exactly what that means, but I don’t feel like this confusion is the problem. I think that surviving thousand-to-one odds must be strong evidence that I am somehow immortal (if you disagree, we can make it 3^^^^3-to-one), and QI is the only form of immortality that I currently assign non-negligible probability to.
I briefly thought that this made QI somehow a privileged hypothesis, because I can’t observe the strongest evidence against it (my death). But I don’t think that’s the case, because there are other observations that would reduce my belief in QI. For example, if wavefunction collapse turns out to be a thing, I understand that would make QI much less likely. (But I don’t actually know quantum mechanics beyond Eliezer’s sequence, so the actual observations would be along the lines of “people who know QM saying that QI is incompatible with other observations that have been made, and appearing to know what they’re talking about”.)
If QI is true, you still don’t observe anything in 1023/1024 of all worlds. Nothing makes the 1-in-1024 event happen in any case; you just happen to only wake up in the situation where you legitimately get to be surprised about it happening.
If QI is true, then the probability that I observe myself surviving is 1. That’s pretty much what QI is. It is true that most of my measure does not survive, but I don’t think that’s relevant in this case.
In 1023/1024 worlds your observer doesn’t update on QI, and neither do you. In 1/1024 worlds, you update on QI and so does the version of the person you interact with. ;)
The person watching me assigns a 1/1024 chance to my survival, regardless of whether QI is true or false. So if I survive, he does not update his belief in QI.
(That said, if I observed a 1-in-3^^^^3-probability event, that might well increase my belief in MWI (I’m not sure it should, but it would be along the lines of “there’s no way I would have observed that unless all possible outcomes were observed by some part of my total measure”). And I’m not sure how MWI could be true but QI false, so it would also increase my belief in QI.
So maybe observing a 1-in-1024 event would do the same, but certainly not to anything like the same extent as personally surviving those odds.)