That’s the problem—it shouldn’t really convince him. If he shares all the data and priors with external observers, his posterior probability of MWI being true should end up the same as theirs.
It’s not very different from surviving a thousand rounds of classical Russian roulette in a row.
ETA: If the chance of survival is p, then in both cases P(I survive) = p, P(I survive | I’m there to observe it) = 1. I think you should use the second one in appraising the MWI...
ETA2: Ok maybe not.
No; I think you’re using the Aumann agreement theorem, which can’t be used in real life. It has many exceedingly unrealistic assumptions, including that all Bayesians agree completely on all definitions and all category judgements, and all their knowledge about the world (their partition functions) is mutual knowledge.
In particular, to deal with the quantum suicide problem, the reasoner has to use an indexical representation, meaning knowledge expressed by a proposition containing the term “me”, where “me” is defined as “the agent doing the reasoning”. A proposition that contains an indexical can’t be mutual knowledge. You can transform it into a different form in someone else’s brain that will have the same extensional meaning, but that person will not be able to derive the same conclusions from it, because some of their knowledge is also in indexical form.
(There’s a more basic problem with the Aumann agreement theorem—when it says, “To say that 1 knows that 2 knows E means that E includes all P2 in N2 that intersect P1,” that’s an incorrect usage of the word “knows”. 1 knows that E includes P1(w), and that E includes P2(w). 1 concludes that E includes P1 union P2, for some P2 that intersects P1. Not for all P2 that intersect P1. In other words, the theorem is mathematically correct, but semantically incorrect; because the things it’s talking about aren’t the things that the English gloss says it’s talking about.)
There are indeed many cases where Aumann’s agreement theorem seems to apply semantically, but in fact doesn’t apply mathematically. Would there be interest in a top-level post about how Aumann’s agreement theorem can be used in real life, centering mostly around learning from disagreements rather than forcing agreements?
I’d be interested, but I’ll probably disagree. I don’t think Aumann’s agreement theorem can ever be used in real life. There are several reasons, but the simplest is that it requires the people involved share the same partition function over possible worlds. If I recall correctly, this means that they have the same function describing how different observations would restrict the possible worlds they are in. This means that the proof assumes that these two rational agents would agree on the implications of any shared observation—which is almost equivalent to what it is trying to prove!
I will include this in the post, if and when I can produce one I think is up to scratch.
What if you represented those disagreements over implications as coming from agents having different logical information?
I don’t really see what the problem with Aumann’s is in that situation. If X commits suicide and Y watches, are there any factors (like P(MWI) or P(X dies|MWI)) that X and Y necessarily disagree on (or whose agreement would be completely unrealistic)?
If joe tries and fails to commit suicide, joe will have the propositions
action(agent(me), act(suicide))
survives(me, suicide)
while jack will have the propositions
action(agent(joe), act(suicide))
survives(joe, suicide)
They both have a rule something like
MWI ⇒ for every X, act(X) ⇒ P(survives(me, X)) = 1
but only joe can apply this rule. For jack, the rule doesn’t match the data. This means that joe and jack have different partition functions regarding the extensional observation survives(joe, X), which joe represents as survives(me, X).
If joe and jack both use an extensional representation, as the theorem would require, then neither joe nor jack can understand quantum immortality.
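A minimal sketch of that mismatch (my own illustration in Python rather than SNActor syntax; the representation is just a stand-in):

# Propositions as (predicate, agent, act) tuples. The MWI rule
# "for every X, act(X) => P(survives(me, X)) = 1" only fires when the
# agent term is the reasoner's own indexical "me".
def apply_mwi_rule(knowledge):
    return [("survives", agent, act)
            for (pred, agent, act) in knowledge
            if pred == "action" and agent == "me"]

joe_kb = [("action", "me", "suicide")]     # joe's indexical representation
jack_kb = [("action", "joe", "suicide")]   # jack's extensional representation of the same event

print(apply_mwi_rule(joe_kb))    # [('survives', 'me', 'suicide')]
print(apply_mwi_rule(jack_kb))   # []  (the rule never matches jack's data)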
So you’re saying that the knowledge “I survive X with probability 1” can in no way be translated into an objective rule without losing some information?
I assume the rules speak about subjective experience, not about “some Everett branch existing” (so if I flip a coin, P(I observe heads) = 0.5, not 1). (What do the probabilities of the possible, mutually exclusive outcomes of a given action sum to in your system?)
Isn’t the translation a matter of applying conditional probability? i.e. P(survives(me, X)) = 1 ⇔ P(survives(joe, X) | joe’s experience continues) = 1
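A toy check of that translation (a sketch of my own; the 0.5 branch weight is an arbitrary assumption):

p = 0.5                              # assumed weight of the branch where joe survives
p_survives_joe = p                   # extensional: P(survives(joe, X))
p_continues = p                      # "joe's experience continues" is the same event
print(p_survives_joe / p_continues)  # 1.0 for any p > 0; the unconditional p is what gets lost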
I was actually going off the idea that the vast majority of worlds (100% minus P(survive all suicides)) would have the subject dead at some point, so all those worlds would not be convinced. Sure, people in your branch might believe you, but in (100 − 9.3x10^-302) percent of the branches, you aren’t there to prove that quantum suicide works. This means, I think, that the chance of your being around to prove to the rest of the world that quantum suicide confirms MWI is equal to the chance of your surviving in a non-MWI universe.
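For what it’s worth, a figure of that order falls out of assuming a 1-in-2 survival chance per trial and 1000 trials (those exact odds are my assumption, not something fixed in the thread):

print(0.5 ** 1000)   # ~9.33e-302, the fraction of branches with the subject still alive at the end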
I was going to say: well, if you had a test with a 1% chance of confirming X and a 99% chance of disconfirming X, ran it a thousand times, and presented only the confirmations, you would be laughed at for suggesting that X is confirmed. But it is MWI that predicts every quantum event comes out every result, so only under MWI could you run the test a thousand times; so that would indeed be pretty convincing evidence that MWI is true.
Also: I only have a passing familiarity with Robin’s mangled worlds, but at probabilities on the order of 10^-300, it feels like a small enough ‘world’ to get absorbed into the mass of worlds where it works a few times and then they actually do die.
The problem I have with that is that from my perspective as an external observer it looks no different from someone flipping an (appropriately weighted) coin a thousand times and getting a thousand heads. It’s quite improbable, but the fact that someone’s life depends on the coin shouldn’t make any difference for me: the universe doesn’t care.
Of course it also doesn’t convince me that the coin will come up heads on the 1001st flip.
(That’s only if I consider MWI and Copenhagen here. In reality, after 1000 coin flips/suicides I would start to strongly suspect some alternative hypotheses. But even then it shouldn’t change my confidence in MWI relative to my confidence in Copenhagen.)
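A sketch of that update from the outside (my own illustration; the priors and per-flip odds are placeholders):

from fractions import Fraction

p_heads = Fraction(1, 2)           # assumed per-flip chance of heads
likelihood = p_heads ** 1000       # chance of 1000 heads in a row, identical under MWI and Copenhagen
prior_odds = Fraction(1, 1)        # placeholder prior odds, MWI : Copenhagen
posterior_odds = prior_odds * likelihood / likelihood
print(posterior_odds)              # 1, i.e. the run of heads shifts nothing between the two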
Indeed, the anthropic principle explains the result of quantum suicide, whether or not you subscribe to the MWI. The real question is whether you ought to commit quantum suicide (and harness its anthropic superpowers for good). It’s a question of morality.
I would say quantum suiciding is not “harnessing its anthropic superpowers for good”; it’s just conveniently excluding yourself from the branches where your superpowers don’t work. So it has no more positive impact on the universe than your dying has.
I think you are correct.
Related (somewhat): The Hero With A Thousand Chances.