If he shares all the data and priors with external observers, his posterior probability of MWI being true should end up the same as theirs.
No; I think you’re using the Aumann agreement theorem, which can’t be used in real life. It has many exceedingly unrealistic assumptions, including that all Bayesians agree completely on all definitions and all category judgements, and that all their knowledge about the world (their partition functions) is mutual knowledge.
In particular, to deal with the quantum suicide problem, the reasoner has to use an indexical representation, meaning knowledge expressed by a proposition containing the term “me”, where “me” is defined as “the agent doing the reasoning”. A proposition that contains an indexical can’t be mutual knowledge. You can transform it into a different form in someone else’s brain that will have the same extensional meaning, but that person will not be able to derive the same conclusions from it, because some of their knowledge is also in indexical form.
(There’s a more basic problem with the Aumann agreement theorem—when it says, “To say that 1 knows that 2 knows E means that E includes all P2 in N2 that intersect P1,” that’s an incorrect usage of the word “knows”. 1 knows that E includes P1(w), and that E includes P2(w). 1 concludes that E includes P1 union P2, for some P2 that intersects P1. Not for all P2 that intersect P1. In other words, the theorem is mathematically correct, but semantically incorrect, because the things it’s talking about aren’t the things that the English gloss says it’s talking about.)
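For concreteness, here’s a toy encoding of the partition machinery the theorem uses (my own sketch in Python, with made-up worlds and partitions, just following the standard definitions):

# Toy model of Aumann's setup: worlds are integers, each agent's information
# is a partition of the worlds, and P_i(w) is the cell of agent i's partition
# containing the actual world w.

def cell(partition, w):
    """Return the cell of the partition containing world w."""
    return next(block for block in partition if w in block)

worlds = {1, 2, 3, 4}
P1 = [{1, 2}, {3, 4}]   # agent 1's partition
P2 = [{1, 3}, {2, 4}]   # agent 2's partition

w = 1            # the actual world
E = {1, 2, 3}    # some event

# "1 knows E at w" means E contains every world 1 considers possible:
knows_1 = cell(P1, w) <= E                      # {1,2} subset of {1,2,3} -> True

# The gloss "1 knows that 2 knows E" is cashed out as: E contains *every*
# P2-cell that intersects P1(w), not just the actual cell P2(w) -- which is
# the gap between the math and the English gloss pointed out above.
knows_1_that_2_knows = all(block <= E for block in P2 if block & cell(P1, w))

print(knows_1, knows_1_that_2_knows)            # True False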
There are indeed many cases where Aumann’s agreement theorem seems to apply semantically, but in fact doesn’t apply mathematically. Would there be interest in a top-level post about how Aumann’s agreement theorem can be used in real life, centering mostly around learning from disagreements rather than forcing agreements?
I’d be interested, but I’ll probably disagree. I don’t think Aumann’s agreement theorem can ever be used in real life. There are several reasons, but the simplest is that it requires that the people involved share the same partition function over possible worlds. If I recall correctly, this means that they have the same function describing how different observations would restrict the possible worlds they could be in. In other words, the proof assumes that these two rational agents would agree on the implications of any shared observation—which is almost equivalent to what it is trying to prove!
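As a rough illustration of what sharing the partition function rules out (my own toy numbers, not anything from the theorem itself): if the same observation restricts the possible worlds differently for two agents, a common prior no longer forces a common posterior.

# Two agents with a common prior over four worlds, but different ideas of
# which worlds a given observation rules out (i.e. different partition
# functions).  The numbers here are made up for illustration.

prior = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}
event = {1, 3}                     # the event whose probability they report

obs_restricts_to_A = {1, 2}        # what the observation leaves open for agent A
obs_restricts_to_B = {1, 3}        # what the same observation leaves open for agent B

def posterior(event, restriction):
    """Condition the common prior on the worlds the observation leaves open."""
    mass = sum(prior[w] for w in restriction)
    return sum(prior[w] for w in event & restriction) / mass

print(posterior(event, obs_restricts_to_A))   # 0.5
print(posterior(event, obs_restricts_to_B))   # 1.0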
I will include this in the post, if and when I can produce one I think is up to scratch.
What if you represented those disagreements over implications as coming from agents having different logical information?
I don’t really see what the problem with Aumann’s is in that situation. If X commits suicide and Y watches, are there any factors (like P(MWI), or P(X dies|MWI)) that X and Y necessarily disagree on (or that it would be completely unrealistic for them to agree on)?
If joe tries and fails to commit suicide, joe will have the propositions (in SNActor-like syntax)
action(agent(me), act(suicide))
survives(me, suicide)
while jack will have the propositions
action(agent(joe), act(suicide))
survives(joe, suicide)
They both have a rule something like
MWI ⇒ for every X, act(X) ⇒ P(survives(me, X)) = 1
but only joe can apply this rule. For jack, the rule doesn’t match the data. This means that joe and jack have different partition functions regarding the extensional observation survives(joe, X), which joe represents as survives(me, X).
If joe and jack both use an extensional representation, as the theorem would require, then neither joe nor jack can understand quantum immortality.
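Here’s a minimal sketch of that matching failure, if it helps (my own toy encoding of the propositions as tuples, not a serious knowledge representation):

# Propositions are (predicate, agent, act) tuples; "me" is the reasoner's
# indexical for themselves.  The rule below is the MWI rule from above,
# stated indexically.

MWI = True

def quantum_immortality_conclusions(kb):
    """If MWI, then for every act X that *me* performs, P(survives(me, X)) = 1.
    Return the conclusions this rule licenses from the knowledge base."""
    if not MWI:
        return []
    return ["P(survives(me, %s)) = 1" % act
            for (pred, agent, act) in kb
            if pred == "action" and agent == "me"]

joe_kb  = [("action", "me",  "suicide"), ("survives", "me",  "suicide")]
jack_kb = [("action", "joe", "suicide"), ("survives", "joe", "suicide")]

print(quantum_immortality_conclusions(joe_kb))    # ['P(survives(me, suicide)) = 1']
print(quantum_immortality_conclusions(jack_kb))   # []  -- the rule never matches jack's data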
So you’re saying that the knowledge “I survive X with probability 1” can in no way be translated into an objective rule without losing some information?
I assume the rules speak about subjective experience, not about “some Everett branch existing” (so if I flip a coin, P(I observe heads) = 0.5, not 1). (What do the probabilities of the possible, mutually exclusive outcomes of a given action sum to in your system?)
Isn’t the translation a matter of applying conditional probability? I.e., P(survives(me, X)) = 1 ⇔ P(survives(joe, X) | joe’s experience continues) = 1
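A quick numerical version of what I mean, with made-up branch weights (assuming MWI and a 50/50 device):

# Everett branches after joe pulls the trigger; the weights are made up.
branches = [
    {"joe_survives": True,  "weight": 0.5},
    {"joe_survives": False, "weight": 0.5},
]

# jack's (extensional) probability that joe survives:
p_survives = sum(b["weight"] for b in branches if b["joe_survives"])    # 0.5

# joe's (indexical) probability, conditioned on "joe's experience continues",
# i.e. restricted to branches containing a surviving joe to do the observing:
continuing = [b for b in branches if b["joe_survives"]]
p_survives_given_continuation = (
    sum(b["weight"] for b in continuing if b["joe_survives"])
    / sum(b["weight"] for b in continuing)
)                                                                       # 1.0

print(p_survives, p_survives_given_continuation)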