I don’t really see what the problem with Aumann’s theorem is in that situation. If X commits suicide and Y watches, are there any factors (like P(MWI), or P(X dies | MWI)) that X and Y necessarily disagree on (or on which agreement would be completely unrealistic)?
If joe tries and fails to commit suicide, joe will have the propositions (in SNActor-like syntax)

action(agent(me), act(suicide))
survives(me, suicide)
while jack will have the propositions
action(agent(joe), act(suicide))
survives(joe, suicide)
They both have a rule something like
MWI ⇒ for every X, act(X) ⇒ P(survives(me, X)) = 1
but only joe can apply this rule. For jack, the rule doesn’t match the data. This means that joe and jack have different information partitions regarding the extensional observation survives(joe, X), which joe represents as survives(me, X).
If joe and jack both use an extensional representation, as the theorem would require, then neither joe nor jack can understand quantum immortality.
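To make the asymmetry concrete, here is a minimal sketch in Python (the helper names internalize and qi_rule_applies are invented for illustration, and the SNActor-style propositions are modeled as plain tuples) of how the same extensional event yields different internal representations, so that the quantum-immortality rule fires only for joe:

```python
# Minimal sketch: indexical vs. extensional representations of one event.
# Helper names are illustrative, not from the source.

def internalize(proposition, self_name):
    """Rewrite an extensional proposition into an agent's indexical form:
    occurrences of the agent's own name become 'me'."""
    return tuple('me' if term == self_name else term for term in proposition)

# The shared, extensional fact: joe survived his suicide attempt.
event = ('survives', 'joe', 'suicide')

joe_belief = internalize(event, self_name='joe')    # ('survives', 'me', 'suicide')
jack_belief = internalize(event, self_name='jack')  # ('survives', 'joe', 'suicide')

def qi_rule_applies(belief):
    """The rule  MWI ⇒ for every X, act(X) ⇒ P(survives(me, X)) = 1
    only matches propositions about 'me'."""
    return belief[0] == 'survives' and belief[1] == 'me'

print(qi_rule_applies(joe_belief))   # True: joe assigns P = 1 to his own survival
print(qi_rule_applies(jack_belief))  # False: jack's data never matches the rule
```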
So you’re saying that the knowledge “I survive X with probability 1” can in no way be translated into an objective rule without losing some information?
I assume the rules speak about subjective experience, not about “some Everett branch existing” (so if I flip a coin, P(I observe heads) = 0.5, not 1). (What do the probabilities of the possible, mutually exclusive outcomes of a given action sum to in your system?)
Isn’t the translation a matter of applying conditional probability? I.e., P(survives(me, X)) = 1 ⇔ P(survives(joe, X) | joe’s experience continues) = 1
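As a quick numerical sketch of that conditioning step (the branch weights below are invented for illustration, assuming a simple weighted-branch model of MWI):

```python
# Toy branch model: each Everett branch has a weight and a survival flag.
# Weights are invented for illustration and sum to 1.
branches = [
    {'weight': 0.999, 'joe_survives': False},
    {'weight': 0.001, 'joe_survives': True},
]

# Unconditional (extensional) probability that joe survives:
p_survives = sum(b['weight'] for b in branches if b['joe_survives'])

# Conditioning on "joe's experience continues" restricts attention to
# exactly the branches in which joe survives.
p_continues = sum(b['weight'] for b in branches if b['joe_survives'])
p_survives_given_continues = p_survives / p_continues

print(p_survives)                  # 0.001 -- jack's extensional estimate
print(p_survives_given_continues)  # 1.0   -- joe's indexical P(survives(me, X))
```

Conditioning on “joe’s experience continues” picks out exactly the surviving branches, which is why joe’s indexical probability is 1 even though jack’s extensional estimate is tiny.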